Making Good Progress

How can we put the lessons from Making Good Progress into practice?

I had originally intended this to be a follow-up to my two blogs reviewing the new Robert Peal series of textbooks. However, I think the ideas contained in Daisy Christodoulou’s book expose weaknesses in the design of most schools’ assessment models and deserve far wider application. There has been a refreshing focus on models and theories of assessment in education discourse recently. However, it has mostly served to depress me: we’re doing it wrong! Time for some optimism, then, and to start thinking about the next steps towards accurately assessing our pupils’ work.

You would need to read the book in full, of course, to see Daisy’s evidence base and her full analysis of the problems with assessment in schools. I have written a particularly thorough summary of the book, one PowerPoint slide per chapter, which I would be keen to discuss with anyone who wishes to get in touch. However, I would suggest that Daisy’s unique contributions and her most important ideas are as follows:

  • Descriptor-led assessments are an unreliable way of getting an accurate idea of the quality of a piece of work.
  • Assessment grades are supposed to have a ‘shared meaning’: we need to be able to make reliable inferences from them. This is not the case if we simply aggregate the levels applied to work in lessons, or to ‘end of topic’ pieces of work, and then report these aggregate grades. Daisy calls this banking: students get the credit for learning something in the short run, but we do not know whether it has stuck over time. I would suggest this is one of our biggest flaws as teachers. We test learning too soon, rather than looking for a change in long-term thinking.
  • Summative assessments need to be strategically designed. We cannot use the same task for both formative and summative purposes. Instead, we need to design the ‘summative assessment’ as the end goal. The final task, for example a GCSE exam question, needs to be broken down as finely as possible into its constituent knowledge and skill requirements. These then need to be built up over time and assessed formatively, in a way that gives students opportunities for deliberate practice and for attempting particular tasks again.


What Daisy proposes as a solution is an integrated model of assessment: a model which takes into account the differences between formative and summative assessment, and in which every assessment is designed with reference to its ultimate purpose. In practice, this would look like:

  • Formative assessments which are “specific, frequent, repetitive and recorded as raw marks.”
    • These would be regular tests, likely multiple-choice questions, where all students are supposed to get high marks and marks are unlikely to be recorded. Recording marks starts to blur the lines between formative assessment and summative assessment.
  • Summative assessments which are standard tests taken in standard conditions, which sample a large domain, and which distinguish between pupils. They would also be infrequent: one term of work is not a wide enough domain to assess reliably.
    • For ‘quality model’ assessments, such as those in English and the Humanities, these can be made particularly reliable through the use of comparative judgement. You could, and should, read more about it here. Daisy also suggests that we should use scaled scores, generated through nationally standardised assessments or comparative judgement. This would have the advantage of providing scores that can be compared across years, and class averages would provide valuable data for evaluating the efficacy of teaching. I must confess that I need to understand the construction of ‘scaled scores’ better before I can meaningfully apply this to my teaching practice; a rough sketch of one common construction follows this list, but I would still welcome the suggestion of a useful primer.
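By way of illustration only, here is a minimal sketch, in Python, of one common way a scaled score can be constructed: a linear transformation of raw marks onto a fixed mean and standard deviation. The scale values, the function name and the use of a single set of marks as the reference cohort are my own assumptions for the example; real national assessments use test-specific conversion tables anchored to a national cohort, so treat this as a rough picture rather than Daisy’s method.

```python
# A rough sketch only: convert raw marks to 'scaled scores' by standardising
# against a reference cohort and mapping onto a fixed scale (assumed here to
# have mean 100 and standard deviation 15). National assessments use
# test-specific conversion tables rather than this simple transformation.
from statistics import mean, stdev

def scaled_scores(raw_marks, scale_mean=100, scale_sd=15):
    """Linearly map raw marks onto a fixed mean and standard deviation."""
    cohort_mean, cohort_sd = mean(raw_marks), stdev(raw_marks)
    return [round(scale_mean + scale_sd * (m - cohort_mean) / cohort_sd)
            for m in raw_marks]

# Because the scale itself stays fixed from year to year, scores and class
# averages expressed on it can be compared across cohorts in a way that raw
# marks on different tests cannot.
print(scaled_scores([12, 18, 25, 31, 40]))  # [82, 90, 100, 108, 120]
```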


I’m starting to think about how I could meaningfully apply these lessons to a history department. Daisy suggests that the starting point is an effective understanding of the progression model. I think this is something the history teaching community is already strong on, though the model remains contested, which is no bad thing. However, the lack of standardisation across the history teaching community means we are unlikely to build up a bank of standardised summative assessments which we could use to compare pupils’ work meaningfully across schools and to diagnose weaknesses in our own students’ performance. This is perhaps something for academy chains and the Historical Association to tackle. I might be wrong, but I think this is something PiXL seem to be doing in Maths, and that Dr Chris Wheadon is laying the foundations for in English. It isn’t something that can be designed at the level of an individual department.

Where teachers can more easily work together is on the construction of a “formative item bank”. This would consist of a series of multiple-choice questions designed to expose students’ thinking on a topic, tease out misconceptions, and judge understanding. Invariably, students’ conceptual thinking in history is undermined by a lack of substantive knowledge. Only once teachers undertake this task, which surely must be a collective effort, can we discern the extent to which this style of formative assessment can detect both first-order and second-order knowledge; some adaptations might be required. We can then integrate this formative assessment with an appropriate model of summative assessment, where the power of collective action on the part of history teachers will undoubtedly be even greater. A rough sketch of what one item in such a bank might look like follows below.
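To make that slightly more concrete, here is a rough, hypothetical sketch of how a single item in such a bank might be structured, so that each wrong answer is deliberately tied to a misconception and each item is tagged by the first-order (substantive) and second-order (disciplinary) knowledge it tests. The field names and the example question are my own illustrative assumptions, not anything prescribed in Daisy’s book.

```python
# A hypothetical structure for one entry in a formative item bank. The fields
# are illustrative assumptions: the point is that every distractor records the
# misconception it is designed to expose, and each item is tagged by the
# first-order and second-order knowledge it tests.
from dataclasses import dataclass, field

@dataclass
class Item:
    question: str
    options: dict           # option letter -> answer text
    correct: str            # letter of the correct option
    distractor_notes: dict  # option letter -> misconception it is meant to expose
    first_order: list = field(default_factory=list)   # substantive knowledge tags
    second_order: list = field(default_factory=list)  # disciplinary concept tags

gunpowder_item = Item(
    question="Why did the plotters of 1605 target the State Opening of Parliament?",
    options={
        "a": "The King, Lords and Commons would all be gathered in one place",
        "b": "Parliament would be empty and therefore easy to destroy",
        "c": "The plotters wanted to make Guy Fawkes king",
    },
    correct="a",
    distractor_notes={
        "b": "confuses the opportunity the cellar offered with the plotters' actual aim",
        "c": "exposes the misconception that the plotters sought the throne for themselves",
    },
    first_order=["Gunpowder Plot", "James I", "Parliament"],
    second_order=["causation"],
)
```

Recording the targeted misconception alongside each distractor is what would make a wrong answer diagnostic rather than merely wrong, and the tags are what would let a department check whether the bank covers both substantive and disciplinary knowledge.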

I shall therefore spend my holidays thinking about, among other things, the first steps I need to take as a teacher to develop such a bank of formative material, and how I would need to shape the structure of summative assessments across the various Key Stages. I intend to write more on this subject. I think it is at the very core of ensuring that we maximise the potential of the new knowledge-rich curriculums many are advocating. Of what use is such a curriculum if we do not have an accurate understanding of how far students are grasping its material?



‘Pealite Planning’ Part Two

A review of the textbook in light of the associated scheme of work and resources. Part one of the review can be found here.

I must point out, before any further critique, that Robert Peal has been extremely generous in sharing his resources and schemes of work. These are an excellent and helpful contribution, which must be used critically and judiciously.

In my first post, I was critical of the comprehension-style questions and of how they do not encourage students to think hard about the material. Peal does go somewhat further in his schemes of work, where each lesson of reading from the book is followed up with written tasks. Questions such as ‘What can a historian learn about the response to the Gunpowder Plot from a Dutch engraving?’ are likely to provoke deeper thinking, and the writing process itself, I’d contend, encourages students to ‘do something’ with the information.

These written tasks are, as this question implies, often linked to a discussion of historical sources. The sources are grounded in discussions of their role and purpose in learning about the period and of how they might be of value to historians. As such, Peal’s schemes of work imply some engagement with the concept of using historical evidence, and this is the start of students considering how our historical knowledge can only ever be provisional in nature. One would have to be in the lessons to see how far Peal develops these ideas, but there is no reason why Peal’s resources should not lead to individual teachers doing so in their own classrooms.

Collins has also published free ‘teacher guides’ to accompany each of the textbooks. Within these, teachers are directed towards “thinking deeper” questions. These should be for all students but, at the very least, they too encourage students to work up their historical knowledge from the raw chronicle they are provided with in the textbooks.

What troubled me in these teacher guides, though, were the “suggested activities” that accompany each lesson. Take these two activities, which accompany the lesson on James I and the Gunpowder Plot, as an example:

  • Complete a storyboard of the Gunpowder Plot, giving an illustrated narrative of the series of events.
  • Further research the claims some people have made that the Gunpowder Plot was – to some extent – a hoax, and debate whether this could or could not be true.


I was surprised to see these tasks from a knowledge-rich, anti-progressive teacher such as Peal. “Complete a storyboard” isn’t a particularly historical activity, and I’m not sure how far it will encourage students to really probe the significance of the Gunpowder Plot. I think this speaks to the lack of a genuine progression model across the textbook series. Whilst this lesson sits under the banner of a chapter on the English Civil War, no connection is drawn between the reigns of James I and Charles I. A trick has been missed here, and we’re lacking historical depth. It is also worth noting that, during his presentation at the West London Free School history conference, Louis Everett did not mention any further activities being a regular feature of the “reading” lessons.

Peal is also very strong here on directing students towards more precise sources, and there are links to tasks where students are encouraged to read more academic history. Teachers will want to take these very clearly referenced materials and integrate them into a curriculum model with greater coherence. I have formed the impression, from the range of examples provided in the schemes of work and from the conference, that the works of historians are primarily confined to homework reading. There is another missed trick here, in that it would perhaps be valuable to integrate historical debates with the lesson materials. I have written on the merits of doing so elsewhere on this blog, and one suspects this might also go some way towards helping students to understand that the textbook provides an interpretation rather than the interpretation.

In short, the schemes of work and Peal’s broader range of resources merit further exploration and any teacher looking at the textbooks must combine the two. However, there remain limitations to this package which will be addressed in one final post.

‘Pealite Planning’ Part One

A review of Robert Peal’s textbook series

I have seen some glowing reviews on Twitter, and some strong criticism, of Robert Peal’s ‘Knowing History’ series of textbooks. I have used these books to plug a few gaps in my teaching since attending the West London Free School history conference. However, it was when planning a series of lessons, namely on the causes of the English Civil War, that I felt genuinely motivated to write up my experiences of using them. They have their strengths, and I think they do their job well. Criticism needs to be recalibrated, and more properly set against those strengths. I aim to address the quality of the books in this blog post, then follow up with how my views are enhanced when the books are put in the context of Peal’s schemes of work, before finally considering gaps within this bigger picture.

The Books

My initial reaction to the books was, and remains, that they are excellent. I have been looking for a solid textbook that offers me a ‘lump of text’ to work with for quite some time. My deputy head, and fellow history teacher, even took a tour of our 1970s archive to find materials that would offer us a knowledge-rich source of information on which to base our lessons. For this, Peal’s books must be applauded: too many textbooks lack such text. They are instead crowded with sources, which are impossible to weave into the text and certainly do not enhance students’ understanding of history as a discipline. I am thinking, in particular here, of the various SHP books such as Contrasts and Connections.

My reservations about the books echo some of Alex Ford’s concerns. The language is ambitious. Too ambitious. I agree that it should be ambitious, and pitched high. However, I teach in a grammar school and, in places, the students need to refer to the knowledge organiser and a dictionary too often, which distracts from the overall narrative.

I imagine Peal might counter with the suggestion that students can still grasp the true narrative and the core of events whilst reading aloud. While this might help students to navigate the trickier language, I’m not a fan of reading aloud. I’ve never been convinced that it helps students to internalise the narrative and to think about it. David Didau has explained that getting students to follow along with a text while it is read aloud can be problematic. While Didau argues that reading aloud does aid comprehension for students with weaker literacy, one can’t help but suspect they would be overwhelmed by the difficulty of these texts. It would be interesting to see some research on this, and on how much unfamiliar vocabulary can be navigated within a piece of text before students’ working memories are overloaded.

One of my other reservations concerns the interpretations offered within the text. As suggested above, I was teaching a series of lessons on the causes of the English Civil War. There isn’t any context on how Charles I’s problems with parliament can be traced back to James I’s relationship with it; in fact, James I only gets a mention in the book as part of the Gunpowder Plot. This disappointed me, and still left me needing to supplement the book with my own resources.

This isn’t a problem in itself. But where teachers claim that the books let them plan very quickly, and where the books are advertised as ‘knowing history’, it is slightly problematic that we are not getting the full story, nor any sense that what we do get is just that: an interpretation, one which lays most of the problems at Charles I’s door. The problem here lies more with the teachers claiming to use the book in this way. I feel that the books have a lot to offer if they’re used critically, by teachers who don’t outsource their planning. Their value is only enhanced by Peal’s generous contribution of resources on his website, which I shall address in my next post.

My other issue relates to the ‘comprehension questions’. Asking five comprehension questions at the end of a large block of text is no indication of how far students are able to comprehend the information presented, nor will it aid memory retention. I’m tempted to recall Willingham’s mantra that “memory is the residue of thought”: the questions on each double-page spread do not encourage any thought. What I have done instead is ask students to do a variety of things with the text. The following three tasks are fairly typical of my practice:

  • Asking students to “reduce” each paragraph to a one-sentence summary. This can reveal whether students have understood the most important part of the text.
  • Asking students to “transform” the text into an image, leaning on the idea of “dual coding”.
  • Asking students to “prioritise” the most useful sentence for understanding a particular idea.

I believe this is better than asking comprehension questions, which do not encourage students to actively use the information. Look at this task, which shows how easy it is to extract information and answer comprehension questions without having to assemble meaning.


In my next post I shall address how this can be taken further, in light of Robert Peal’s schemes of work.