The Limits of Educational Research

This has been quite the week for evidence in the education industry. The Chartered College of Teaching took the time to pledge to use evidence-based practice, flipped learning has apparently been exposed as not being value for money, and now lesson observations have been confidently declared pointless. The final one in this list will perhaps cause the most trouble in the English school system, given the amount of time teachers spend observing each other.

Here is the claim made by the EEF last week:

“Increasing structured teacher observations makes no difference to GCSE English and maths results” (Press Release)

Sadly, the way in which this research evidence is being reported will at best lead to considerable confusion and at worst could do real damage in schools. Yes, the EEF wants to take research ‘off the shelf’ and into the classroom, but if this leads to badly informed practitioners it might as well have stayed on the shelf in the first place. This research does not show that lesson observations are ineffective; it shows that increasing the number of lesson observations structured in a particular way is unlikely to lead to an increase in a particular set of GCSE results.

Are lesson observations now pointless?

The studies the EEF released last week produced a series of incendiary headlines, such as “Lesson observations ‘make no difference to pupils’ results’”. School leaders need to be careful, however, and look beyond the headline. Whilst the research benefited hugely from an impressive scale, running in over 82 schools, that scale seems to have led to decidedly patchy implementation.

Importantly, the teachers don’t seem to have actually implemented the programme of structured observations that the EEF was seeking to measure. This isn’t a small problem to be mentioned as an aside in the findings section; it is pretty fundamental and needs to be highlighted. The required number of lesson observations did not take place in many of the schools, feedback sessions were described in some cases as non-existent and in others as ‘informal’, the training lasted a single day with no follow-up noted, and the observations seemed to have no particular focus. In fact, the training made clear that there “was no requirement for post-observation discussion to take place as part of the intervention.” I don’t think there is a school leader in the country who needs an expensive piece of research to find out that if you don’t give someone feedback after an observation then the process is unlikely to impact student results; how is the practitioner going to improve? Nor do we need a randomised controlled trial with thousands of participants to discover that CPD consisting of only one day of training is unlikely to lead to meaningful implementation.

As for the highly contentious ‘value for money’ assessment that comes with each Toolkit study, the Education Endowment Foundation’s claims regarding the expense of Teacher Observation are perhaps the most galling. Thankfully, Schools Week highlighted that ‘most of the money was spent on software and iPads to record observations and training.’ None of this is necessary for effective lesson observations. EEF studies, notably the flipped learning one also released last week, seem to use very expensive forms of the interventions in question, with high-value software and take-home laptops whose necessity is rarely assessed.

When making decisions, we need to be frank about the limits of educational research

This brings me back to the Chartered College of Teaching’s pledge to uphold ‘evidence-based practice’ in education. Lesson observations may work in some formats, and they may be more effective in particular contexts (when a school is seeking rapid improvement, for example). But we must remember that, at heart, education is far from a science. The rise of large randomised controlled trials has fed the lie that the only appropriate measure of an educational intervention’s value is its impact on GCSE results. This is absurd. The problems with this are best exemplified by my favourite EEF claim, that there is “no evidence that new buildings or particular aspects of architecture directly improve learning,” which is the kind of thinking that led to the defunding of Building Schools for the Future for over 700 schools in dire need of improvement. Sure, a new building might not change my GCSE grade, but it might be nice to work in a building that fits all my fellow students and doesn’t have a defunct heating system. Decisions in schools don’t just impact students’ grades; they impact hours of their childhood.

Educational research has rightly improved its stature in schools. However, all research is limited by feasibility and tends to focus on outcomes we can measure robustly. When so much of what makes schools important is currently extremely hard to quantify (well-being springs to mind), we cannot and should not limit our decision-making to that which can be precisely measured. Bear this in mind the next time someone whips out their favourite EEF report in a staff meeting.