Does assessment dictate your curriculum and pedagogy? - Blog 6 - Sixth Form Colleges Association

Does assessment dictate your curriculum and pedagogy?

Date: 24th Oct 2022 | Author: Dr Becky Allen | Categories: Teaching

This piece was originally published on Becky's blog here last month under the title 'Which came first: The knowledge architecture beliefs or the assessment?'

Do you write your class assessments to suit your subject’s curriculum and its knowledge architecture? Or do you shape your visions of knowledge architecture to reflect your subject’s assessment tradition?

Classroom practice is often described by educationalists as a three-legged stool comprising curriculum, pedagogy and assessment. Three-legged stools are exceptionally stable, provided each of the three legs is crafted to be strong, robust and exactly the right size in relation to the other two.

Which leg should we set about crafting first? It can feel natural to start with curriculum – our decisions about what counts as valid knowledge – and this is particularly true in an era where attention to curriculum within the accountability system is more prominent than in previous decades.

But is it possible to craft the curriculum leg first, blind to the possibilities of pedagogy and assessment, or is it inevitable that ideas we might already hold about pedagogy and assessment in turn shape our curriculum thinking?

Here is the story of how beliefs about knowledge architecture and optimal assessment brush against each other in two secondary school subjects. In each case, beliefs about knowledge architecture and beliefs about assessment occupy an inter-tangled relationship that is hard to pull apart.

The unreliable history essay

In the New Statesman last week, Professor Rob Coe made the case for assessment reform to improve the reliability of A-Level marking, which is terrifyingly low in subjects like history. Marker consistency is low because the open-ended essays are marked using loosely specified rubrics that leave considerable room for human interpretation. This is not a good situation when a one-grade drop can cost you a university place. So, should we change assessments, using tighter rubrics or different question types, to give students grades in which we have more confidence?

In a 2015 blog, Heather Fearn (ex-history teacher now at Ofsted) argues that we should be wary of improving the reliability of the history exam because it may come at the expense of validity. For example, if we are to define the rubric around an essay more tightly, this requires us to define ‘correct’ and ‘incorrect’ ways of conceptualising historical issues. Thus, we may mark reliably, but the grades may not identify the truly strongest historians in a valid way.

Validity matters, but I’d argue we should be even more concerned about the effect of introducing short answer questions or tighter rubrics on our beliefs about history’s knowledge architecture and its desired curriculum structure. Once this happens, teachers’ beliefs about what it means to think with a historical lens can quickly start to shift. And once this happens, the curriculum and pedagogy legs of the stool start getting re-shaped.

Science and its short answer question

I thought the view that school scientific knowledge was best tested using short answer questions was uncontroversial. On our five-hour round trip to researchED Leics, Adam Robbins told me this was how science assessment was done. Pieces of scientific knowledge have well-defined connections to other pieces of scientific knowledge, so we can use fairly short questions to pull out these pieces of knowledge and their connections and check students know them. I think this view is nicely encapsulated in this post by Adam Boxer.

Who would disagree with this? Well, it turns out this is not the accepted wisdom amongst science teachers who are teaching the International Baccalaureate. I sought out several of them over the summer and they told me that switching to the IB curriculum with its extended answer questions has transformed their perspective on how meaning-making takes place amongst science students.

One of these IB teachers, Christian Moore Anderson, has been writing a series of fascinating posts on the issue that will interest curriculum thinkers across all disciplines. Start with this post about the shortcomings of inference via short answer questions and find further links from there.

He argues that short answer questions are useful, but that they fragment knowledge and so perform poorly at showing interconnected thinking. Furthermore, they can make it difficult for the teacher to distinguish between verbatim memorisation and true understanding. The sum of the marks from short answer questions does not equate to performance on open-ended questions because they test different types of knowing.

At the heart of this debate is not assessment but rather a complex debate about the nature of knowledge architecture in science. Can a schema be written that fully encapsulates knowledge and the connections that need to be learnt? What is meaning-making and understanding in science? Is scientific understanding best encapsulated as a change in long-term memory or as a change in a way of seeing?

I am not a science teacher, so it isn’t for me to comment on the strengths of his worries about the orthodoxy in cognitive science and the short answer question. However, I do think it is an interesting example of how views about optimal assessment approaches and beliefs about knowledge architecture are inextricably linked. And more importantly, the comments of the IB teachers who had previously taught GCSE and A level suggest that once you encounter a new assessment framework, it slowly starts shifting your beliefs about knowledge architecture and optimal pedagogy.

Balancing the stool

I see these types of debates emerging in different forms in all subjects. Do these debates about assessment approaches matter to student learning? (Admittedly limited) research suggests they do, because we shape our study approaches in relation to the nature of the assessment we believe we are working towards (this is a nice, albeit small, study).

It would be simpler if we could defer choices about assessment until after the curriculum and pedagogy had been planned.

It would be simpler if the assessments we use did not in turn influence choices about classroom instruction, knowledge architecture, motivation to study, decisions about how to study, and so on.

But this isn’t how the three-legged stool is crafted. Each leg is finely tuned to sit alongside the others. Alter one leg and you will find yourself having to re-work the others.

Think hard about the origins of your beliefs about assessment. And then think harder about how those beliefs about assessment are in turn shaping your beliefs about everything else.

Becky Allen is an academic and education commentator. She is currently a professor at the University of Brighton, and also works at Teacher Tapp, which she co-founded.
