Hi,
The return of Robodebt to the news this week highlights how badly things can go wrong when ethical principles are overlooked by those using AI.
Robodebt refers to the automated debt recovery scheme run by the Australian Government's Department of Human Services (later Services Australia) between July 2015 and November 2019. The system used a computer algorithm to match welfare payments against income data from the Australian Taxation Office, averaged across the year, and then automatically issued debt notices where overpayments were identified. However, using averaged income data instead of actual income data was incorrect, resulting in $1.763 billion of unlawfully claimed debt and, ultimately, a Royal Commission.
The problem with Robodebt was not the use of computer-based data matching to identify potential overpayments - given the size of the data sets involved, manual data analysis would have been impossible. Rather, it was the lack of human oversight of the system and the denial of transparency and contestability to those who did challenge their debts.
In November 2019, the Australian Government published its voluntary AI Ethics Framework. Among the principles contained in this framework are:
- Reliability and Safety: AI systems should reliably operate in accordance with their intended purpose.
- Transparency and Explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
- Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
- Accountability: Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.
Had these principles been followed, the Robodebt fiasco could easily have been avoided.
The lesson of Robodebt is not that businesses and government bodies shouldn't use AI. Instead, it is that any organisation wishing to use AI should do so responsibly and in a way that respects human rights.
What lessons did you learn from Robodebt? Hit reply to let me know.
Talk again soon,
Dr Genevieve Hayes.
p.s. I recently interviewed Dr Fei Huang on the topic of fairness and anti-discrimination in machine learning for my podcast Value Driven Data Science. The episode goes live on Monday.