Statistical models feed on large-scale historical trends and reproduce the patterns they find in the data – in this case, the higher performance of typically wealthier schools. New research by UCL's Institute of Education found that 23% of comprehensive school students had their grades under-predicted by the models, compared with just 11% of grammar and private school pupils.
Total deference to a standardised algorithm in place of teacher input shows how removing the human element from decision-making can sow confusion and distrust. AI should be overlaid with human expertise to empower people, not to disempower them by perpetuating bias.
Coming after the Home Office's recent decision to scrap its AI tool for visa applications, this is yet another reminder that unless ethical considerations are prioritised in AI implementation, an operational and PR disaster awaits.
The UK government's eventual U-turn should be a warning to AI adopters: ethical oversights made at the early design and build stage reveal themselves as huge operational and reputational costs further down the line.