UPDATE: the Government have now reverted to Centre Assessed Grades: is this fair?!
There are two main ways in which this is NOT a fair way to resolve the situation:
- As noted below, in Stage Two of the original process, teachers and SLT had ALREADY started the work of the algorithm by trying to match grades to the average grade profile of the previous three years. This is why it was so unjust to pull them down further. But restoring the CAGs still leaves quite a lot of students whose teacher-assessed grades were changed in order to fit the profile. You could argue that, in the normal system, students at the lower end of a grade might arbitrarily lose that grade due to grade boundary changes, but this is not the normal system. It is important to remember that students never got a proper opportunity to prove where in the grade boundary they sat.
- Leaving the CAGs unmoderated means that any disparity between schools in how strictly the profile-matching was applied will never be picked up, leaving some cohorts treated more harshly than others.
The shambles that the government and Ofqual have presided over is still unfair on students, and it leaves teachers to carry the can for moderating their students’ results downwards in good faith, following Ofqual’s instructions. I predict this will be a huge problem when GCSE results come out this week and students are aggrieved by their teachers’ apparently harsh assessments. The CAGs were never supposed to be the grades we thought our students would get, but the closest we could get to those grades within the stringent guidelines laid out by Ofqual. This subtlety will mostly be lost on students and parents.
I don’t know what the answer to this shambles is, but I do think it is fair to point out that it could have been done so much better. Off the top of my head, I can come up with a better and fairer process, but the government had both experts and plenty of time on their side. The system should have worked something like this, in my opinion:
- Ofqual should have given clear guidance as to how centres should moderate their teacher-assessed marks into CAGs.
- Where there were small cohorts or large anomalies, centres should have been given a form to fill in, providing more detailed evidence on both individual students’ attainment and any relevant circumstances (such as a change of teacher) that should be taken into account.
- The exam boards could then have used the CAGs and the evidence to review and moderate. The presumption should have been in favour of awarding grades that centres could evidence rather than focusing blindly on avoiding grade inflation. This was never going to set a precedent or cause any major problems.
It is now too late to go back and do this properly, but the above steps could inform a centre-driven appeals process on behalf of students who have been hard done by. On the whole, we will only be talking about a grade here and there, but for a student who might have got into Manchester on AAB and now has ABC, this is cold comfort.
Are the A level results unfair? My personal view.
This post is an attempt to explain how results were awarded and offer some opinion as to the fairness of the process.
What do we mean by fair?
I take fairness in this context to mean that all students have the same opportunity to be awarded a mark that reflects their achievement over the two years of their course.
Is the normal process fair?
Any assessment process has some unfair elements. In the awarding of A levels some examples might be:
- a numerical mark on a continuous linear scale is arbitrarily converted into a discrete grade, leaving students who have very similar levels of attainment with different grades
- students might be unwell or having difficulties during particular assessment periods (special circumstances try to mitigate this to some extent)
- the way in which knowledge is tested may suit some students better than others (a range of different assessment types can help here)
- examiners might make mistakes or not be competent (moderation and appeals mostly eliminate these problems)
- students are prepared better or worse by different teachers
The process this year
There were several stages, and each has potentially unfair elements, as follows.
Stage One: teachers used available evidence to come up with a mark for each student, from which a rank order and a grade were calculated. This is basically an informal version of what usually happens, although there are some potential problems:
- Students may have performed worse in mock written examinations than they would have done in the summer. This is a shame but at least all students had the same opportunity (illness during mocks is potentially problematic here)
- Students may or may not have finished coursework that might provide evidence of their ability. Students who were organised and had completed work in a timely manner potentially do better, which is arguably unfair, although all students had the same opportunity to finish coursework up to that point.
This stage seems to me not to be notably more unfair than in normal years.
Stage Two: staff, in consultation with their managers, moderated these marks so that they were broadly in line with results from the previous three years. Senior managers also checked and in some cases adjusted this moderation. The government wanted marks to be plausible in the context of past results. Overall this is reasonable, but as noted below, it is not necessarily fair on every individual student.
- The main problem here is where the profile of current students does not fit well with the profile of previous students. The smaller the cohort, the worse the problem is likely to be. If I have a very small group in which three out of six students happen to be outstanding, but previous years were quite average, it is hard to fairly award marks that are in line with past results.
- Even in a larger cohort, if you have an unusual number of B students compared to normal years, some at the top end of the grade could (wrongly) be given an A if you have some of those left over in your nominal ‘quota’ whilst others might be (wrongly) awarded a C to keep the overall profile broadly in line.
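The quota problem above can be sketched in a few lines of Python. All names and numbers here are invented for illustration; this shows the mechanism, not Ofqual’s actual code.

```python
# Hypothetical sketch: forcing this year's cohort to match a historical
# grade distribution, regardless of teacher judgement.

def allocate_by_quota(ranked_students, historical_counts):
    """Assign grades to a rank-ordered cohort so the grade counts
    match a historical profile (best grade first)."""
    grades = []
    for grade, quota in historical_counts:
        grades.extend([grade] * quota)
    return dict(zip(ranked_students, grades))

# Teacher judgement: an unusually strong cohort of mostly B students...
teacher_view = {"Asha": "B", "Ben": "B", "Cara": "B", "Dev": "B",
                "Ed": "C", "Fi": "C"}

# ...but the three-year profile says 1 A, 2 Bs and 3 Cs in a cohort of 6.
profile = [("A", 1), ("B", 2), ("C", 3)]

ranked = list(teacher_view)  # assume this is the teacher's rank order
moderated = allocate_by_quota(ranked, profile)
print(moderated)
# Asha is pushed up to an A and Dev down to a C, even though the
# teacher judged all four of the top students to be B students.
```

The point of the sketch is that once the grade counts are fixed in advance, borderline students must move up or down to fill the quotas, whatever the evidence says about them individually.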
Moderation of this sort is decidedly not fair to individual students. You could argue, however, that the process is fair overall, in the sense that this could happen to any student equally. In other words, it is fair in the same sense that Russian roulette is fair: everyone has an equal opportunity to take a bullet.
Stage Three: exam boards followed procedures set out by Ofqual to check the moderation of marks and adjust them so that overall the results were in line with those of previous years. This is a normal part of awarding A levels, so it is reasonable in principle.
However, smaller cohorts (minority subjects such as music in 11-16 schools, and many subjects in private schools) are difficult to moderate because the datasets are too small to be compared meaningfully. Ofqual therefore decided that where cohorts were smaller than 15, more weight would be given to the Centre Assessed Grades, whereas in larger cohorts more weight would be given to the statistical modelling. This was compounded by Ofqual’s unreasonably rigid methodology: if their algorithm suggested that there would be 7 Cs in Music in a particular school’s cohort, then those 7 Cs had to be allocated regardless of the teacher evidence (unless the cohort was too small to make such a prediction).
THIS is hugely unfair because, for the first time in this process, students are not being treated in the same way: those in smaller cohorts are getting a more accurate and individualised mark than those in larger cohorts. This is like putting more bullets in some Russian roulette players’ chambers than others.
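A size-dependent blend between teacher judgement and the statistical model might look something like the sketch below. The upper threshold of 15 comes from the reporting above; the lower threshold of 5 and the linear taper are my own assumptions, not Ofqual’s exact formula.

```python
# Illustrative only: how much weight the Centre Assessed Grade gets
# as cohort size grows. Thresholds and taper shape are assumptions.

def cag_weight(cohort_size, lower=5, upper=15):
    """Weight given to the CAG: 1.0 for tiny cohorts, 0.0 for large
    ones, tapering linearly in between."""
    if cohort_size <= lower:
        return 1.0
    if cohort_size >= upper:
        return 0.0
    return (upper - cohort_size) / (upper - lower)

for n in (4, 10, 20):
    print(n, cag_weight(n))
# 4  -> 1.0  (teacher judgement stands)
# 10 -> 0.5  (half-and-half)
# 20 -> 0.0  (statistical model dominates)
```

Whatever the exact taper, the structural unfairness is the same: the weight a student’s individual evidence carries depends entirely on how many classmates they happen to have.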
It is understandable that Ofqual wanted to keep results from being unreasonably inflated by the optimism of teachers. However, in these extraordinary circumstances, why is it more important to keep results within expected norms than to ensure students get as close to the ‘right’ mark as possible? The main problem with a one-off year of inflated results is that these students will be competing against students with ‘ordinary’ grades next year if they defer university applications. Surely this is not so difficult to manage.
Grade profiles are a very blunt instrument. I would have put more weight on value added data. If a subject at a college normally has positive value added, it is unlikely that it will suddenly plummet. Boards should have checked moderated results against this benchmark and let CAGs stand if the value added was within normal ranges.
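The value-added check I have in mind could be as simple as the following sketch. The value-added figures, the tolerance, and the simplification of “value added” to a single number per year are all invented for illustration.

```python
# Sketch of a value-added sanity check: let CAGs stand if this year's
# value added sits within the historical range (plus a tolerance).
# All data below is invented for illustration.

def within_normal_range(this_year_va, past_va, tolerance=1.0):
    """Return True if this year's value added is inside the
    historical range, widened by a small tolerance."""
    lo, hi = min(past_va), max(past_va)
    return (lo - tolerance) <= this_year_va <= (hi + tolerance)

past_value_added = [0.4, 0.7, 0.5]  # positive VA in each of the last 3 years

print(within_normal_range(0.9, past_value_added))  # True: CAGs stand
print(within_normal_range(3.0, past_value_added))  # False: flag for review
```

The attraction of this benchmark is that it moderates the centre’s judgement against its own track record rather than against a fixed grade quota, so an unusually strong cohort is only questioned when its results are implausible for that centre.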
This year’s students deserved a compassionate process that gave individuals the best possible chance of a fair mark. My own view is that the system failed to achieve this.