Practising my way to being a better teacher

Reading Harry Fletcher-Wood’s latest blog in his series on practice-based development, I have been reflecting on how I can sharpen my own professional development.

Harry is right to suggest that practice is an extremely effective route to improving professional practice. The teacher training and CPD sessions earlier in my career had been loaded full of ideas: card sorts of activities to try out, discussions around a range of different elements of teaching practice. I left each of these sessions with an unfortunate mix of feeling overwhelmed and disinterested. Such a range of strategies left me with a sense that I had a lot to achieve, yet, with so many ideas discussed at such a superficial level of depth, no sense of how I might act upon them. So, while I might have had ambitions to try some of the ideas out, I would file the wealth of resources in a nice “CPD” folder, which lived at the back of my classroom until there was a new packet of documents to add to it.

Teaching is all about habits and habit formation. When the lessons are coming thick and fast, we fall back on our instincts. These are entrenched, and driven by the values we hold as teachers. Far better, then, to focus on just one element of our practice, and work at this until improvement comes. An in-depth discussion of one idea, and deliberately practising its use, is far more likely to succeed than a scattergun approach. I have been fortunate that my school has been on a ‘CPD journey’ in this direction in recent years. Teachers have signed up to ‘professional learning groups’, each with a particular focus such as ‘teaching for memory’, where precise strategies are discussed, related to our vision of what we want students to achieve. In a supportive environment, possible strategies are shared, dissected and amended for our own subject disciplines. These are then rehearsed in the classroom, and discussed at greater length in a follow-up session to consider their effectiveness.

This has taken me some way, and Harry’s blogs have prompted some further reflections. In combination with my recent MA dissertation, they have led me to see this practice-based vision as a little front-loaded: I need to think more deliberately about the practice I intend to engage in. So far, the implementation of my habits has been too sketchy, and too reliant on external stimuli. I have been proficient in selecting areas of my practice requiring development, and in prescribing a solution, but school CPD sessions are inevitably too far apart to keep any sense of momentum going.

Harry’s latest blog includes a model from Brent Maddin at the TeacherSquared Teacher Institute that has encouraged me to think more carefully about how I deliberately plan my practice. So far, my planning has amounted to ‘do something’ and ‘have something to discuss’ in the feedback CPD session. This week, my school launched ‘CPD seminars’, evolving the PLG (professional learning group) model discussed above. Colleagues read a small extract from Making Every Lesson Count, evaluated some of its ideas, and considered what it suggested about where we might be able to improve our practice. I selected the idea that students need to ‘layer up’ their writing as a goal to work on, which may well be worth discussing at greater length in a post of its own.

What I do need to do, on this occasion, is be more precise about how I intend to implement this strategy, and about the evidence I intend to elicit. Ordinarily, the evidence might take the form of the extent to which students’ essays improved. My goal is to get students to review their written work, and to change their habits: to see written work as ongoing rather than complete the moment a sufficient volume of work is on the page. That goal still stands. However, I also need to think about what evidence there is that my own habits have changed.

I have considered some potential options, which a more proficient teacher-trainer may well be able to expand upon:

  • Inviting colleagues or leadership into the classroom to feed back on the language I use when discussing written work.
  • Recording extracts of lessons to review the language and tasks used.
  • Work scrutiny.
  • A termly review of my teacher planner to check whether my homework & classroom tasks are consistent with the layering process I am keen to develop.
  • Fixing interviews in the near-, medium- and long-term future to discuss with students how they view written work. Presumably, if their attitudes have changed over time, I am being consistent in my newly adopted approach.

It might be particularly useful to have colleagues observe portions of my lessons, to consider the extent to which my teaching practice demonstrates that my habits and classroom instincts have changed. However, this seems problematic: it places demands on stretched colleagues, and there is always going to be a strong temptation to ‘deliver what they’re looking for’. Instead, I’m going to specify the precise changes I want to notice in my practice, and collect evidence as an individual. This seems to be the only sustainable way to develop large numbers of teachers. In this matter, I am accountable to myself. The presence (or absence) of a future post evaluating these changes in my teaching practice will be clue enough as to whether practising practice has worked!

Can you instinctively know the grade of a piece of work?

I have been thinking about how we might persuade colleagues to revisit their initial approaches to assessment at work. It is only once we truly understand the ingrained habits and assumptions that we can begin to encourage a genuine reorientation. I am also conscious that recent education debate is dominated by a so-called ‘progressive’ versus ‘neo-traditionalist’ dichotomy. Believing, habitually, that where two opposed positions are set out the best approaches usually emerge from the grey area between, I want to better understand teachers’ responses to ‘new’ assessment theories, to interrogate them critically, and to ensure that we are moving in the right direction.

My titular question is derived from a discussion with a senior colleague who had been raising questions of assessment with the English and Maths faculties. He mused that introducing new ideas about assessment might be a challenge, given an ingrained belief that staff could instinctively feel the grade of a piece of work. This coincided with two separate colleagues asking me, last week, what grade I thought a piece of work deserved: one was a history essay, the other an Extended Project. This suggested to me that such a belief stretches further across the school, and, I’d expect, across the profession more widely. You only have to look at lesson activities that offer students grade-based learning objectives for evidence of this.

I wasn’t sure what to make of this proposition. It seemed logical, on the face of it. Could colleagues who have been in the profession for years, if not decades (as most of mine have), not build up a reliable method of applying grade-based descriptors from the sheer volume of exemplars they had seen in their careers? This seems highly probable in the case of Maths, where a difficulty model of progression prevails: questions can be at ‘grade seven’ or ‘grade nine’ skill-levels. This is not so in the humanities, nor perhaps in English.

Still, surely a teacher could look at enough essays to sense which engage in analysis of sufficient complexity to merit a higher grade than another? My gut instinct is that teachers could do this. In essence, teachers build certain expectations and criteria into their minds, and operate some form of comparative judgement. But is it desirable? I’m not sure, but the issues seem to be as follows:

  • Colleagues who believe they can instinctively apply a grade to a piece of work are typically experienced and in positions of middle management. This is not the case, necessarily, for those in their department who are perhaps considerably less experienced. Middle managers need to provide the structures to enable these colleagues to assess work accurately, and ascertain its particular quality relative to the rest of the class, year-group or cohort.
  • However experienced a teaching colleague might be, the new 9-1 specifications are radically different to their predecessors. This might be more true of some subjects than others, but as a ‘Modern World historian’ I find the new rubrics represent a genuine revolution in both content and assessment structures. Teachers are likely to have mastered the progression model, but clearly they cannot assess work against externally set grades at this stage.
  • For schools and departments to generate data to plan teaching and interventions, summative judgements about students’ work need to be reliably produced. I mean this in the sense that even if every ‘dart thrown’ is wide of the bullseye, it should be wide by the same amount. This is more likely to be the case with work that is assessed by a department, rather than by individual teachers, and it is best facilitated with methods such as comparative judgement, rather than teachers awarding grades against an internal gut judgement and/or level-based criteria.
  • Even if you could instinctively suggest that an extended piece of writing, for example, was at a particular grade, is this assessment part of a broader set of structures that allows for a skill-level and content-level analysis of a pupil’s knowledge, updated in real time? For the belief that you can give a piece of work a grade to hold, the task in question must be of the sort that matches the final exam. It suggests a teaching approach that is not breaking down the skills and knowledge required and building these up, gradually, over time. Instead, students are being subjected to terminal exam-style questions, which Christodoulou has demonstrated distorts the teaching process and is inherently inaccurate, as teachers begin to ‘teach to the test.’

I’m also left with a desire to swing at the biggest assumption of all lying under the titular question. Why would you ever want to generate a grade for a student’s work at all? Is it even necessary? Questions for those smarter, and more important, than myself.

Colourful Comparative Judgement

Further Refinements Using Comparative Judgement

I recently wrote about my experiments this year with Comparative Judgement, which are worth a read here. I spoke of making a further refinement in practice to make it easier to use with my sixth-form essays.

Now that there are no longer AS exams to generate reliable data on our students, we have conducted a second wave of formal assessments, as a series of ‘end of year’ exams. A source essay was set for the AQA ‘2M Wars and Welfare’ unit relating to the General Strike, and students’ essays were scanned in to be judged comparatively. Comparative Judgement is supposed to be quick: Dr Chris Wheadon has remarked that it should be possible to make a reliable judgement on which essay is best within thirty seconds. However, we were finding it difficult to do this. There are several points of comparison to make, and in previous rounds of comparing essays it was difficult to determine which essay was best when, for example, one had made strong use of own knowledge to evaluate the accuracy of claims made in a source, but another had a robust and rigorous dissection of the source’s provenance. Therefore, we decided to mark up the essays by highlighting the following ‘key elements’, which we determined were essential to judging the quality of an essay:

  • Use of precise own knowledge, integrated with the source content.
  • Comments relating to the provenance of the source.
  • Comments relating to the tone of the source & how this affects source utility.

This led to a discussion of how the highlighting could practically be used when making a comparison. We determined, first of all, that where two essays initially appeared equal, but one did not have even coverage across all three areas, the essay with the broader coverage would be judged the better. In theory, we would have been doing this anyway, but marking up the essays beforehand made this significantly easier to spot and therefore to judge.

We were also able to resolve other tensions when making judgements. It became clear that all of our students had deployed an excellent range of knowledge when determining the accuracy and validity of the arguments made in the sources. It was therefore easier to compare precisely the quality of students’ evaluation of the provenance of each source, with a visual guide to which parts of the lengthy essays to read.

The use of colour was therefore valuable in helping us to extract some general strengths and weaknesses of essays across the set. The significant points of comparison were clearer to spot, and there were things we were looking for, when making a judgement, which were not coming up, e.g. precise comments on the purpose of the source. It also emerged that we were not highlighting something of great importance to us: arguments sharply judging the value of the source. Our students were excellent at ripping apart the sources, commenting on their accuracy, reliability and so on, but were not using these ideas to craft arguments about the utility of the sources to a historian. In essence, some were answering the question and some were not. This gave rise to a feedback task, where students were invited either to pick out for themselves where these arguments appeared in their essays, or to redraft passages to insert them.

Impact on the Students

Students also responded extremely positively to the colour coding of their essays. After initial intrigue about what each of the colours represented, once a key had been worked out, they were alive with discussion about what constituted a quality analysis of the value of a source’s provenance. Immediately, students were responding to comments about the structure and balance of their essays. Without prompting, they were looking across their tables to see if there were strong examples they could look at, to support their own development.

This is certainly what we would want to see from sixth-form students. By guiding students towards recognising certain parts of their essays, we had unlocked their potential to act as independent learners. This was in marked contrast to previous attempts at delivering feedback, where I had suggested, more generically, that they read through their essays themselves and reconcile the feedback given with what they had actually written. Across more than five sides of writing, this isn’t particularly helpful. Instead, individual students were focussing on what made their essays unique. Meanwhile, conversations about how limited students’ discussions of the purpose and provenance of the sources were took on new meaning, as students could very quickly see how far their writing differed from what I was suggesting had been required. As suggested, I was most pleased to see students debating what they really should have said. Some students challenged my highlighting. One high-attaining student in particular instantly recognised that she had not discussed the provenance of the two sources at all, but queried whether some of her remarks might have qualified. This led to a meaningful dialogue about why some of her suggestions did not ‘count’ as such. She immediately amended her answer to include some more relevant points, and her understanding of what it means to dissect the provenance of a source was enhanced. The feedback appeared to be doing its job. However, only the next source essay can start to assess how far this assertion is true.

Making Good Progress

How can we put the lessons from Making Good Progress into practice?

I had originally intended this to be a follow-up to my two blogs reviewing the new Robert Peal series of textbooks. However, I think the ideas contained in Daisy Christodoulou’s book demonstrate weaknesses in the design of most schools’ assessment models, and they require application far more widely. There has been a refreshing focus on models and theories of assessment in education discourse recently. However, it has only served to depress me that we’re doing it wrong! Time for some optimism, then, and to start thinking about the next steps towards accurately assessing our pupils’ work.

You would need to read the book in full, of course, to see Daisy’s evidence base and her full analysis of the problems with assessment in schools. I have written a particularly thorough summary of Daisy’s book, with a PowerPoint slide for each chapter, that I would be keen to discuss with anyone who wishes to get in touch. However, I would suggest that Daisy’s unique contributions and most important ideas are as follows:

  • Descriptor-led assessments are unreliable in getting an accurate idea of the quality of a piece of work.
  • Assessment grades are supposed to have a ‘shared meaning’: we need to be able to make reliable inferences from them. This is not the case if we simply aggregate levels applied to work in lessons, or to ‘end of topic’ pieces of work, and then report these aggregate grades. Daisy calls this ‘banking’: students get the credit for learning something in the short run, but we do not know if it has stuck over time. I would suggest this is one of our biggest flaws as teachers: we test learning too soon, rather than looking for a change in long-term thinking.
  • Summative assessments need to be strategically designed. We cannot use the same task for both formative and summative purposes. Instead, we need to design a ‘summative assessment’ as the end goal. The final task, for example a GCSE exam question, needs to be broken down as finely as possible into its constituent knowledge and skill requirements. These then need to be built up over time, and assessed in a formative style, in a fashion that gives students opportunities for deliberate practice and to attempt particular tasks again.

What Daisy proposes as a solution is an integrated model of assessment: a model which takes into account the differences between formative and summative assessments, and in which every assessment is designed with reference to its ultimate purpose. What this would look like is:

  • Formative assessments which are “specific, frequent, repetitive and recorded as raw marks.”
    • These would be regular tests, likely multiple-choice questions, on which all students are supposed to get high marks; any marks recorded should stay as raw marks, since converting them into reported grades starts to blur the line between formative and summative assessment.
  • Summative assessments which are standard tests, taken in standard conditions, that sample a large domain and distinguish between pupils. They would also be infrequent: one term of work is not a wide enough domain to assess reliably.
    • For ‘quality model’ subjects, such as English and the Humanities, these can be made particularly reliable through the use of comparative judgement. You could, and should, read more about it here. Daisy also suggests that we should use scaled scores, generated through nationally standardised assessments or comparative judgement. This would have the advantage of providing scores that could be compared across years, and class averages could provide valuable data for evaluating the efficacy of teaching. I must confess that I need to understand the construction of ‘scaled scores’ better before I can meaningfully apply this information to my teaching practice, and I would welcome the suggestion of a useful primer; my own toy understanding is sketched below.
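
By way of a primer for myself, the sketch below captures my current, tentative understanding of how a comparative judgement engine might turn a pile of ‘which essay is better?’ decisions into something like a scaled score, using the Bradley-Terry model commonly cited as underpinning comparative judgement tools. To be clear, the essay names, the judgement data and the mean-100 rescaling are all invented assumptions of mine, not any particular tool’s actual algorithm.

```python
import math

# Each tuple records one pairwise judgement: (winner, loser).
# The essays and results here are invented purely for illustration.
judgements = [
    ("essay_a", "essay_b"), ("essay_a", "essay_c"),
    ("essay_b", "essay_c"), ("essay_b", "essay_a"),
    ("essay_c", "essay_b"), ("essay_a", "essay_c"),
]

essays = sorted({e for pair in judgements for e in pair})
theta = {e: 0.0 for e in essays}  # one latent 'quality' score per essay

# Fit the Bradley-Terry model by gradient ascent on its log-likelihood:
# P(i beats j) = exp(theta_i) / (exp(theta_i) + exp(theta_j)).
for _ in range(500):
    grad = {e: 0.0 for e in essays}
    for winner, loser in judgements:
        p_win = 1.0 / (1.0 + math.exp(theta[loser] - theta[winner]))
        grad[winner] += 1.0 - p_win
        grad[loser] -= 1.0 - p_win
    for e in essays:
        theta[e] += 0.1 * grad[e]
    mean = sum(theta.values()) / len(theta)
    theta = {e: t - mean for e, t in theta.items()}  # pin the scale's centre

# Re-express the latent scores on a friendlier scale (mean 100, sd 15),
# one plausible reading of what a 'scaled score' might be.
sd = math.sqrt(sum(t * t for t in theta.values()) / len(theta)) or 1.0
scaled = {e: 100 + 15 * t / sd for e, t in theta.items()}
for essay, score in sorted(scaled.items(), key=lambda kv: -kv[1]):
    print(f"{essay}: {score:.1f}")
```

The appeal, as I understand it, is that no judge ever awards a mark: the scale emerges purely from the pattern of wins and losses, which is what makes the resulting scores comparable across judges in a way that gut-instinct grading is not.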

I’m starting to think about how I could meaningfully apply these lessons to a history department. Daisy suggests that the starting point is to have an effective understanding of the progression model. I think this is something the history teaching community is already strong on, though the model remains contested, which is no bad thing. However, the lack of standardisation across the history teaching community means we are unlikely to build up a bank of standardised summative assessments which we could use to compare pupils’ work meaningfully across schools, and so diagnose weaknesses in our own students’ performance. This is something for academy chains and the Historical Association, perhaps, to tackle. I might be wrong, but I think this is something PiXL seem to be doing in Maths, and that Dr Chris Wheadon is setting the foundations for in English. This isn’t something that can be designed at the individual department level.

Where teachers can more easily work together is on the construction of a “formative item bank”. This would consist of a series of multiple-choice questions that expose students’ thinking on a topic, tease out misconceptions, and judge understanding; invariably, students’ conceptual thinking in history is undermined by a lack of substantive knowledge. Only once teachers undertake this task, which surely must be a collective effort, can we discern the extent to which this style of formative assessment can detect both first and second-order knowledge. Some adaptations might be required. We can then integrate this formative assessment with an appropriate model of summative assessment, where the power of collective action on the part of history teachers will undoubtedly be even greater.
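
To make the idea concrete, here is a minimal sketch of what a single entry in such a bank might look like. The item, the misconception tags and the diagnose helper are entirely my own invention, offered as one possible shape for the bank rather than anything proposed in Daisy’s book; the point is that tagging each distractor means a wrong answer tells the teacher what to reteach, not merely that the student was wrong.

```python
from dataclasses import dataclass

@dataclass
class Option:
    text: str
    correct: bool = False
    misconception: str = ""  # what choosing this option would suggest

@dataclass
class Item:
    topic: str      # substantive (first-order) knowledge being sampled
    concept: str    # second-order concept, e.g. causation, evidence
    question: str
    options: list

# A hypothetical item; each distractor targets a specific misconception.
item = Item(
    topic="Causes of the English Civil War",
    concept="causation",
    question="Why did Charles I's relationship with Parliament break down?",
    options=[
        Option("Long-term tensions over money, religion and royal authority, "
               "sharpened by the years of personal rule", correct=True),
        Option("A single event: Charles's attempt to arrest the Five Members",
               misconception="collapses a long-term process into one trigger"),
        Option("Parliament had always been opposed to monarchy",
               misconception="reads modern institutions back into the past"),
    ],
)

def diagnose(item: Item, chosen: int) -> str:
    """Turn a response into a reteaching point rather than a mere mark."""
    option = item.options[chosen]
    return "secure" if option.correct else f"reteach: {option.misconception}"

print(diagnose(item, 1))  # reteach: collapses a long-term process into one trigger
```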

I shall therefore spend my holidays thinking about, among other things, the first steps I need to take as a teacher to develop such a bank of formative material, and how I would need to shape the structure of summative assessments across the various Key Stages. I intend to write more on this subject. I think it is at the very core of ensuring that we maximise the potential of the new knowledge-rich curriculums many are advocating. Of what use is such a curriculum if we do not have an accurate understanding of how far students are grasping its material?

‘Pealite Planning’ Part Two

A review of the textbook in light of the associated scheme of work and resources. Part one of the review can be found here.

I must point out, before any further critique, that Robert Peal has been extremely generous in sharing his resources and schemes of work. These are an excellent and helpful contribution, which must be utilised critically and judiciously.

In my first post, I was critical of the comprehension-style questions, and of how they do not encourage students to think hard about the material. Peal does go somewhat further with his schemes of work, where each lesson of reading from the book is followed up with written tasks. These questions, such as ‘what can a historian learn about the response to the Gunpowder Plot from a Dutch engraving?’, are likely to provoke deeper thinking, and the writing process itself, I’d contend, also encourages students to ‘do something’ with the information.

These written tasks are, as this question implies, often linked to a discussion of historical sources. These sources are grounded in discussions of their role and purpose in learning about the period, and of how they might be of value to historians. As such, Peal’s schemes of work imply some engagement with the concept of using historical evidence, and this is the start of students beginning to consider how our historical knowledge can only be provisional in nature. One would have to be in the lessons to see how far Peal develops these ideas, but there is no reason why Peal’s resources should not lead to individual teachers doing so in their own classrooms.

Collins has also published free ‘teacher guides’ to accompany each of the textbooks. Within these, teachers are directed towards “thinking deeper” questions. These should arguably be for all students, but at least they too encourage students to work up their historical knowledge from the raw chronicle the textbooks provide.

What troubled me here though, in these teacher guides, were the “suggested activities” to accompany each lesson. Take these two activities, which accompany the lesson on James I and the Gunpowder Plot, as an example:

  • Complete a storyboard of the Gunpowder Plot, giving an illustrated narrative of the series of events.
  • Further research the claims some people have made that the Gunpowder Plot was – to some extent – a hoax, and debate whether this could or could not be true.

I was surprised to see these tasks from a knowledge-rich, anti-progressive teacher such as Peal. The idea of “complete a storyboard” isn’t particularly historical, and I’m not sure how far it is going to encourage students to really probe the significance of the Gunpowder Plot. I think this speaks to the lack of a genuine progression model across the textbook series. Whilst this lesson hangs under the banner of a chapter on the English Civil War, no connection is drawn between the reigns of James I and Charles I. Here, we have missed a trick, and we are lacking historical depth. It is also worth noting that Louis Everett didn’t mention any such further activities being a regular feature of the “reading” lessons during his presentation at the West London Free School history conference.

Peal is also very strong here on directing students towards more precise sources, and there are links to tasks where students are encouraged to read more academic history. Teachers will want to take the very clearly referenced materials and integrate them into a curriculum model with greater coherence. I have formed the impression, from the range of examples provided in the schemes of work and from the conference, that the works of historians are primarily confined to homework reading. There is another missed trick here, in that it would perhaps be valuable to integrate historical debates with the lesson materials. I have written on the merits of doing so elsewhere on this blog, and one suspects this might also go some way towards helping students to understand that the textbook provides an interpretation rather than the interpretation.

In short, the schemes of work and Peal’s broader range of resources merit further exploration, and any teacher looking at the textbooks must combine the two. However, there remain limitations to this package, which will be addressed in one final post.

‘Pealite Planning’ Part One

A review of Robert Peal’s textbook series

I have seen some glowing reviews on Twitter, and some strong criticism, of Robert Peal’s ‘Knowing History’ series of textbooks. I have used these books to plug a few gaps in my teaching since attending the West London Free School history conference. However, it was when planning a series of lessons, namely on the causes of the English Civil War, that I felt genuinely motivated to write up my experiences of using his books. They have their strengths, and I think they do a job well; criticism needs to be recalibrated, and set properly against those strengths. I aim to address the quality of the books in this blog post, will follow up with how my views are enhanced when the books are put in the context of Peal’s schemes of work, and will finally consider gaps within this bigger picture.

The Books

My initial reaction to the books was, and remains, that they are excellent. I have been looking for a solid textbook that offers me a ‘lump of text’ to work with for quite some time; my deputy head, and fellow history teacher, even took a tour through our 1970s archive to find materials that offered us a knowledge-rich source of information to base our lessons on. For this, Peal’s books must be applauded. Too many textbooks lack such text, being crowded instead with sources which are impossible to weave into the narrative and which certainly do not enhance students’ understanding of history as a discipline. I am thinking, in particular, of the various SHP books such as Contrasts and Connections.

My reservations about the books echo some of Alex Ford’s concerns. The language is ambitious. Too ambitious. I agree that it should be ambitious, and pitched high. However, I teach in a grammar school and, even here, students in places need to refer to the knowledge organiser and a dictionary so often that it distracts from the overall narrative.

I imagine Peal might counter with the suggestion that students can still grasp the true narrative and the core of events whilst reading aloud. While this might help students to navigate the trickier language, I’m not a fan of reading aloud: I’ve never been convinced that it helps students to internalise the narrative, and to think about it. David Didau has explained that getting students to follow along with a text while it is read aloud can be problematic. While Didau argues that reading aloud does aid comprehension for students with weaker literacy, one can’t help but suspect they would be overwhelmed by the difficulty of these texts. It would be interesting to see some research on this, and on how much unfamiliar vocabulary can be navigated within a piece of text before students’ working memories are overloaded.

One of my other reservations concerned the interpretations offered within the text. As suggested above, I was teaching a series of lessons on the causes of the English Civil War. There isn’t any context on how Charles I’s problems with Parliament can be traced back to James I’s own relationship with it; in fact, James I only gets a mention in the book as part of the Gunpowder Plot. This disappointed me, and still left me needing to supplement the materials in the book with my own resources.

This isn’t a problem in itself. But where teachers are claiming that they can plan very quickly, and the books are being advertised as ‘knowing history’, it is slightly problematic that we are not getting the full story, nor a sense that it is just that: an interpretation, one which lays most of the problems at Charles I’s door. The problem here lies more with the teachers claiming to use the book in this way. I feel that the books have a lot to offer if they’re used critically, by teachers who don’t outsource their planning. Their value is only enhanced by Peal’s generous contribution of resources on his website, which I shall address in my next post.

My other issue relates to the ‘comprehension questions’. Asking five comprehension questions at the end of a large block of text is no indication of how far students are able to comprehend the information presented, nor will it aid memory retention. I’m tempted to recall Willingham’s mantra that “memory is the residue of thought”: the questions on each double-page spread do not encourage any thought. What I have done instead is ask students to do a variety of things with the text. The following three tasks are somewhat typical of my practice:

  • Asking students to “reduce” each paragraph to a one sentence summary. This can elicit whether students have understood the most important part of the text.
  • Asking students to “transform” the text into an image, leaning on the idea of “dual coding”.
  • Asking students to “prioritise” the most useful sentence for understanding a particular idea.

I believe this is better than asking comprehension questions, which do not encourage students to actively use the information. Look at this task, which shows how easy it is to extract information and answer comprehension questions without having to assemble meaning.


In my next post I shall address how this can be taken further, in light of Robert Peal’s schemes of work.


Thinking Aloud

What do we want students to know about the Middle Ages?

My review of what I teach about the Middle Ages continues at a glacial pace. There are so many different angles from which to approach curriculum planning that it is hard to settle down and make a start.

Michael Fordham suggests that one approach might be to generate a list of essay-style questions: perhaps 100 for Key Stage Three. What might these questions look like, specifically, for the Middle Ages? I concur with Fordham that there needs to be a transition between the Middle Ages and the developing ‘Early Modern’ period post-1485. There might be questions that explicitly refer to this, and indeed to the millennium-wide sweep of the key stage. But what of the Middle Ages alone?

My thinking has also been shaped by discussions surrounding ‘fingertip’ knowledge and ‘residual’ knowledge. In light of my recent reading of Making Good Progress?, the idea of planning for what students must know in the long run, what we want the residual knowledge to be, seems a valuable starting point. This would then need to be integrated with the disciplinary knowledge that students should be expected to pick up through the way this historical content is taught.

When thrashing out what ‘residual knowledge’ we want pupils to carry into the future, we need to keep an eye on the next chapter of the story. I remain of the view that pupils need to be able to orient themselves in time: they should have a basic chronological overview in their heads. This will involve frequent comparisons between historical periods to pinpoint their unique properties. It would hopefully generate a ‘chronological compass’ for pupils, as well as giving some narrative logic to the development of Britain over the last thousand years or so.

With all of these considerations in mind, might I propose the following goals for teaching medieval history, as something of a starting point? I certainly intend to refine these, and to populate them with the more specific historical content that should fulfil these aims. That is the next question: what knowledge of the Middle Ages is ‘cumulatively sufficient’ to meet these goals?

Some goals for teaching medieval history:

Students should know that:

  • The Church, the King and the Nobility competed for power ‘at the top’.
  • England’s peasants lived complex lives and contested for power themselves.
  • Power, wealth and ideas in England were shaped by events outside of her own borders.*
  • There are few sources available for life in the Middle Ages; much of our information has been extracted from a few particular sources.
  • The legacy of the Middle Ages joins up with events in the Early Modern period, and indeed with life today.

*In proposing this, my thinking is shaped very much by Robert Winder’s superb Bloody Foreigners. I am painfully aware that my subject knowledge here is not what it should be. There are, therefore, likely to be gaps which I hope readers will fill.

This knowledge could perhaps be framed in the following essay-style questions:

  1. What was the role and influence of the Church in Medieval England?
    1. To what extent did it change?
  2. Who really governed Medieval England?
  3. How fair is it to speak of a ‘typical peasant experience’?
  4. How do historians know about life in the Middle Ages?
  5. How was England shaped by its invasions?
  6. What event served to change Medieval England the most?
  7. What feature of the Middle Ages has left the strongest legacy?
  8. What marks the Early Modern Period as distinct from the Middle Ages?

These questions might then be added to and adapted after considering what events, knowledge and sources are essential, what content we might be more selective about, and whether the chosen content adds up to sufficient answers to these questions.

So. Three questions to be getting on with:

  • Is this a valid approach to curriculum design?
  • Are these goals comprehensive and historically valid for teaching the Middle Ages at KS3?
  • What knowledge of the Middle Ages is independently necessary and cumulatively sufficient to meet these goals?