The government’s attempt to grade thousands of students by algorithm has been a disaster. Hundreds of student protesters gathered outside the Department for Education on August 16 to make this abundantly clear. “Fuck the algorithm,” they chanted.
Amid the fallout, legal, policy and data experts are now calling for politicians to think again about the development process, transparency, and risks of such algorithmic systems to prevent another catastrophic failure. If they don’t, future systems risk introducing more inequality. “It’s really brought to the fore the potential dangers and unfairness that this kind of model can have on so many people,” says Polly Sanderson, policy counsel at the non-profit Future of Privacy Forum.
The ‘algorithm’ itself is complex but relatively dumb – it is a statistical model, not one powered by machine learning or artificial intelligence. The system was designed by the exams regulator, Ofqual, to ensure results were standardised across the country. But it was hugely flawed: it placed constraints on how many pupils could achieve certain grades, based its outputs on a school’s prior performance, and downgraded around 40 per cent of teacher-predicted results.
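To make the mechanics concrete, the sketch below shows the general shape of this kind of rank-order standardisation: pupils are ranked within a school and then slotted into the grade proportions the school achieved in previous years. This is a heavily simplified illustration with invented names and sample data, not Ofqual’s actual model, which involved many more adjustments.

```python
# Illustrative sketch only: a drastically simplified rank-order
# standardisation. The grade labels, sample data and function name
# are invented for this example; Ofqual's real model was far more
# involved.

GRADES = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst

def standardise(ranked_students, historical_distribution):
    """Assign grades by fitting a school's rank-ordered pupils into
    the grade proportions the school achieved in prior years.

    ranked_students: names in teacher rank order, strongest first.
    historical_distribution: fraction of past pupils per grade, in
    the same order as GRADES; the fractions sum to 1.
    """
    n = len(ranked_students)
    results, position, cumulative = {}, 0, 0.0
    for grade, share in zip(GRADES, historical_distribution):
        cumulative += share
        # The number of pupils who can get this grade or better is
        # capped by the school's past results -- this year's
        # individual work never enters the calculation.
        cutoff = round(cumulative * n)
        for student in ranked_students[position:cutoff]:
            results[student] = grade
        position = max(position, cutoff)
    # Any pupils left over fall into the lowest grade.
    for student in ranked_students[position:]:
        results[student] = GRADES[-1]
    return results

# A school that historically awarded no A*s can award none this
# year either, however strong the current cohort is.
history = [0.0, 0.1, 0.2, 0.4, 0.2, 0.1, 0.0]
print(standardise(["Asha", "Ben", "Cleo", "Dan", "Eve"], history))
```

The key property – and the core complaint – is visible in the sketch: a pupil’s grade depends on their rank and their school’s history, so a top-ranked pupil at a historically weak school is capped regardless of their own work.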
Political decisions in the development of the system and a basic misunderstanding of data have been blamed for the grading problems. Ultimately, the algorithm performed as it was designed to and was the product of human-led decisions. “It’s more about the process and the questions around goals, risk mitigation measures, appropriate scrutiny and redress,” says Jenny Brennan, who researches AI and technology’s impact on society at the Ada Lovelace Institute. Rather than the algorithm getting it wrong, Brennan argues, it was simply the wrong algorithm.
But this is only the beginning. More algorithmic decision-making and decision-augmenting systems will be used in the coming years. Unlike the approach taken for A-levels, future systems may include opaque AI-led decision making. Despite such risks, there remains no clear picture of how public sector bodies – government, local councils, police forces and more – are using algorithmic systems for decision making. What is known about their use is often piecemeal or compiled by individual researchers.
The A-levels algorithm isn’t the first major public sector failure. In fact, it isn’t even the first this month. At the start of August, the Home Office dropped its “racist” visa decision algorithm that graded people on their nationalities following a legal challenge from non-profit group Foxglove. (The organisation had threatened a similar challenge to the A-levels system). And there have been continued problems with the system powering Universal Credit outcomes.
So what can be learned from the latest failure?
People want individual results
The students who gathered outside the Department for Education are likely to be some of the first people to directly protest the decisions of an algorithm. They were joined by a huge outpouring of anger online as student after student recounted their personal story of unfairness – thousands of young people whose future plans were ruined by a bungled political approach.
“What this has shown is that people really care about individual justice,” says Reuben Binns, an associate professor of human-centred computing at the University of Oxford. People want the decisions about their lives to be personal, not based on historical data over which they have no control. One size doesn’t fit all: people are mostly concerned with their own results and whether those results are fair. That means each individual having their potential reflected in the outcome, rather than it being derived from an aggregate, Binns adds.
There needs to be transparency
Ofqual has published a huge amount of information about its algorithm, primarily through a 319-page report on how the system works. The report details the statistical model’s accuracy, how the model was created, and the reasoning behind its behaviour.
It’s rare for this level of detail to be published, Brennan says. While these details are welcomed by researchers and analysts, they may not be useful for people who have been impacted by the algorithm.
“What was missing here was transparency and scrutiny around the goals of the algorithm, with plenty of time [for changes],” Brennan adds. She says there should have been more consideration of the impact of the algorithm as a whole and not just the individual aspects of it. “There is more that can be done around showing evidence of appropriate risk, and impact mitigation.”
The trend of grade inflation – people’s scores increasing year after year – has been a political issue in the UK for years. It is not clear how much bearing political decisions had on the design and development of the final system. “Within the constraint of stopping grade inflation, they were bound to end up with some of these kinds of problems,” Binns says.
Ultimately, a lack of transparency could erode future trust in algorithmic systems that can have positive impacts on society. “This means that there’s not a public debate or awareness of things until after they’ve happened or until there are rumours,” says Rachel Coldicutt, an expert working on ethics and the responsible use of tech. “The lack of openness is leading to a lack of trust, which is leading to a lot of speculation, which I think doesn’t help.”
Expert advice is needed
The statistical problems within the algorithm – such as the ability to only award a certain number of each grade per school – could have been spotted before it was deployed. But external expert advice was ignored months before results day.
Members of the Royal Statistical Society (RSS) offered to help Ofqual in April, but faced five-year non-disclosure agreements if they wanted to be involved in the project. The RSS experts said they believed some of the issues with the algorithm could have been avoided if independent expert advice was taken.
Advice doesn’t only have to come from experts. It’s also crucial to include people who will be impacted by the system in the design and development process, says Sara Jordan, of the Future of Privacy Forum. “Students want things done at the micro level,” she says.
It must be easy to complain
There has been some confusion around the legal protections for those affected by algorithmic decision making. Under Europe’s General Data Protection Regulation and the UK Data Protection Act 2018, decisions about people’s lives that are solely automated are given greater protection to ensure unfairness and discrimination don’t occur.
Ofqual’s privacy statement for its algorithm said it did not believe the decisions counted as fully automated, a line also repeated by the data protection regulator, the Information Commissioner’s Office (ICO). Others disagree. A similar grade-prediction system in Norway has faced a run-in with the country’s data protection regulator. Binns, who used to work at the ICO advising on artificial intelligence and machine learning, says the decisions that were made appear to be automated.
“Their argument that it isn’t automated decision making is just not plausible,” he says. “What grades people get is determined by what the algorithm says about how many students can get each grade,” he adds. “It’s already made a decision about what grades are available to you.”
Matt Burgess is WIRED’s deputy digital editor. He tweets from @mattburgess1