Statistical models used to award exam results to be reviewed

18 August 2020

The Office for Statistics Regulation (OSR) will carry out a review of how the statistical models designed for awarding this year’s exam results were developed.

The OSR said on Tuesday its review will consider to what extent qualifications regulators across the UK developed their models in line with the principles set out in the Code of Practice for Statistics.

It will not review the implications of the model on individual results or take any view on the most appropriate way to award grades in the absence of exams, the OSR added.

The review comes after the Royal Statistical Society (RSS) wrote to the OSR highlighting its concerns about the models used to determine students’ grades following the cancellation of exams during the coronavirus outbreak.

The Government announced a U-turn on Monday when it said students would be able to receive grades based on their teachers’ estimates, following anger over the downgrading of thousands of A-level results.

Ed Humpherson, OSR director general for regulation, said that despite the changes to awarding grades there was “still value in a review”.

In a letter to the RSS he said the OSR therefore planned to undertake a review focused on the process of developing the statistical models.

Mr Humpherson added: “Our review will consider the extent to which the organisations developing the models complied with the principles set out in the code of practice for statistics.

“Our review will seek to highlight learning from the challenges faced through these unprecedented circumstances.

“We will not review the implications of the model on individual results or take any view on the most appropriate way to award exam grades in the absence of exams.”

The OSR said it plans to publish the findings from its review in September.

RSS vice-president for education and statistical literacy Sharon Witherspoon said: “We are glad that the Office for Statistics Regulation has listened to our call for an urgent review into the process for developing the statistical models used by exam regulators.

“The lack of transparency around the process has not only caused significant distress for thousands of students, it has threatened to undermine public trust in statistics and their use. It is therefore right that the Office for Statistics Regulation looks into these issues to ensure this does not happen again.”

Statistical models were originally used to decide what grades to award A-level students in England, Wales and Northern Ireland, and Higher students in Scotland. The models used a mixture of old and new data.

Teachers were asked to give every pupil who was due to take an exam this year an estimated grade in every subject. They were also asked to rank those pupils, comparing them with every other pupil at the school within the same estimated grade.

This information was passed to the exam regulators, who put it through an algorithm – a pre-designed set of mathematical rules – to come up with the final grades.
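As a rough illustration of the kind of data schools submitted, the records might look something like the sketch below. The field names and structure are hypothetical, not the regulators’ actual submission format.

```python
from dataclasses import dataclass

@dataclass
class PupilSubmission:
    """One pupil's entry for one subject, as submitted by a school.
    Field names are illustrative only, not the regulators' real schema."""
    pupil_id: str
    subject: str
    teacher_estimated_grade: str  # e.g. "A*", "A", "B", ...
    rank_within_grade: int        # 1 = strongest pupil at the school with this estimated grade

# Example: two pupils both estimated a B in Maths, ranked against each other
submissions = [
    PupilSubmission("p001", "Maths", "B", 1),
    PupilSubmission("p002", "Maths", "B", 2),
]
```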

Grade adjustments for A-levels awarded in England (PA Graphics)

The algorithm used a historic profile for each school, based on how pupils had performed in every exam subject over the previous three years.

It also looked at how pupils performed earlier in their education – for example, how well A-level students did in their GCSEs – and tried to work out the proportion of pupils who could be matched to their prior attainment.

Rough grades were then assigned to students, marks were assigned on the basis of those rough grades, and finally the national grade boundaries and final grades were decided.

For subjects where only a small number of pupils had been entered for examination in 2020, much of this process was skipped and only the teachers’ predicted grades were used to determine the final marks.
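A minimal sketch of that overall flow is given below, assuming a deliberately simplified version in which a school’s historic grade distribution is applied directly to this year’s cohort in rank order, with a fallback to teacher estimates for small entries. The threshold, grade list and function names are hypothetical and do not reproduce the regulators’ actual specification.

```python
from collections import Counter

SMALL_COHORT_THRESHOLD = 5  # hypothetical cut-off; the real rules were more nuanced
GRADE_ORDER = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst

def allocate_grades(ranked_pupils, teacher_grades, historic_grades):
    """Assign grades to pupils ranked best-to-worst.

    ranked_pupils   : list of pupil ids, best first (from the teacher rank order)
    teacher_grades  : dict mapping pupil_id -> teacher's estimated grade
    historic_grades : list of grades awarded at this school in previous years
    """
    n = len(ranked_pupils)

    # Small entries: skip standardisation and use the teacher estimates directly
    if n <= SMALL_COHORT_THRESHOLD:
        return {p: teacher_grades[p] for p in ranked_pupils}

    # Work out what share of the historic cohort received each grade
    counts = Counter(historic_grades)
    total = len(historic_grades)

    # Translate those shares into a quota of grades for this year's cohort size
    quotas = {g: round(counts[g] / total * n) for g in GRADE_ORDER}

    # Hand grades out down the rank order until each quota is used up
    results = {}
    idx = 0
    for grade in GRADE_ORDER:
        for _ in range(quotas.get(grade, 0)):
            if idx < n:
                results[ranked_pupils[idx]] = grade
                idx += 1
    # Any pupils left over after rounding get the lowest grade in the list
    while idx < n:
        results[ranked_pupils[idx]] = GRADE_ORDER[-1]
        idx += 1
    return results
```

In this simplified picture, a pupil’s own teacher estimate plays no part once the cohort is large enough, which is one reason the rank-order step attracted so much scrutiny.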
