Why do you care?
School leaders are right to seek evidence that a resource that looks great in a demo will actually produce GCSE gains at the end of the year. The concern is twofold: the program may not produce gains at all, or – if schools are targeting specific groups of pupils for intervention – it may not work for that population of pupils.
Buying and implementing an educational program that doesn’t work is not only a waste of money but also a massive waste of time and focus for the leadership team and staff – especially with suppliers now offering three- and even five-year deals.
But hold on, I hear you say. All suppliers claim great things about their products. So how do you sort the wheat from the chaff?
Here’s the inside scoop from someone who’s been working in the field for over 23 years…
Suppliers may HAVE a research study but don’t always publish it
Having independent research behind an educational product is gold dust for the supplier. Many – even most – suppliers commission such studies attempting to show their products are effective.
Problems arise, however, when a research study fails to show a particular product is effective, contains negative findings, or has key findings that are heavily qualified in the final report. In this situation, many suppliers decide they can’t risk publishing the research for fear it might backfire.
Five red flags to watch out for
- No research study on a supplier’s website. If a research study is available, expect it to be very obvious on their website.
- Research for a very popular product that covers only a handful of schools. In such a case, selection bias could be at work.
- Quotes from happy customers are very helpful but not the same as proof.
- Explanations about ‘why it works’ are very useful information for sure. But not the same as actual proof.
- A supplier claims a product is ‘designed to’ raise attainment. Ask: does it actually raise attainment, and where’s the proof?
What to look for
- A statistical research study
- By a trusted independent source
- Large sample size compared to the supplier’s user-base
- Pupil level data not school level data
- A high confidence level – 90% or better
An example of sound independent research
The Impact of E-Learning research studies, commissioned from FFT (Fischer Family Trust) over a ten-year period, concluded that students completing 10 task hours of e-learning using SAM Learning achieve significantly better than expected GCSE grades compared to similar students, matched on Key Stage 2 fine-graded level and several other school- and pupil-level factors.
The research includes hundreds of thousands of students per year. FFT worked to a 95% confidence level in the study, meaning there is no more than a 5% probability that each reported effect arose by chance rather than from the use of e-learning.
The results show that, on average, students who used SAM Learning for 10 hours or more achieved one grade better than expected in two subjects – equivalent to a Progress 8 score of +0.2. However, students in the lowest 20% by prior attainment improved by nearly double the average, achieving one grade better than expected in 3.5 subjects – equivalent to a Progress 8 score of +0.35. It’s not a coincidence: the game mechanics layer within SAM Learning is particularly appealing to students with poor prior attainment, such as white British boys.
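The Progress 8 arithmetic above is simple to verify: Progress 8 averages per-subject progress across 10 qualification slots, so gaining N extra grades across the basket adds N ÷ 10 to the score. A minimal sketch (the function name is my own illustration, not part of the FFT study):

```python
def p8_contribution(extra_grades: float, slots: int = 10) -> float:
    """Progress 8 uplift from gaining `extra_grades` in total across
    the standard 10 qualification slots."""
    return extra_grades / slots

# One grade better in two subjects (the average student in the study):
print(p8_contribution(2))    # 0.2
# One grade better in 3.5 subjects (lowest 20% by prior attainment):
print(p8_contribution(3.5))  # 0.35
```

This is why 3.5 improved subjects maps to +0.35: the divisor is the fixed 10-slot basket, not the number of subjects improved.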
By the way, 10 task hours of usage requires about two assigned activities per teaching week in any subject during the school year.
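As a rough sanity check on that usage figure, here is the arithmetic sketched out; the 39-week teaching year is my assumption (the article doesn’t state one), and the activity length that falls out of it is illustrative only:

```python
# Rough check: how long must each activity be for two assigned activities
# per teaching week to add up to 10 task hours over the school year?
TEACHING_WEEKS = 39          # assumption: typical English school year
ACTIVITIES_PER_WEEK = 2      # from the article
TARGET_MINUTES = 10 * 60     # 10 task hours

total_activities = TEACHING_WEEKS * ACTIVITIES_PER_WEEK
minutes_per_activity = TARGET_MINUTES / total_activities
print(f"{total_activities} activities of about {minutes_per_activity:.1f} minutes each")
# → 78 activities of about 7.7 minutes each
```

Under those assumptions, two short activities a week is all it takes to reach the 10-hour threshold.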
Mike Treadaway, the unassuming genius who taught me about solid research
Mike Treadaway led FFT in its early days and developed the statistical models behind FFT’s predicted student outcomes. I don’t think people realise the scale of Mike’s achievement.
In 2011, I attended the FASS conference for Florida school district superintendents in Tampa. The big keynote presentation was about pioneering new work in the United States that aimed, for the first time, to measure educational performance in terms of progress rather than attainment.
The presenters were from a technology company practising what today would be called Data Analytics or Big Data. They proudly demonstrated that their model could predict student outcomes with 70% confidence based on prior attainment and other factors.
As I sat in the audience, I reflected that FFT’s model, developed by Mike Treadaway, had achieved 90% predictive accuracy for student outcomes as early as 2001.
Without Mike Treadaway’s pioneering work in the early 2000s I don’t believe the UK would have progress (as opposed to attainment) as the main measure of success in our education system today. I think Mike can take personal credit for accelerating that process by at least ten years. It’s a massive contribution to the education system of this country.
Thank you Mike!
07 Jul 2017