Hi,
A recent study conducted at NYU found
that 36% of AI researchers surveyed believed it "plausible that decisions made by AI or machine-learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war."
AI technology has not yet developed to the point where we need to worry about a "Terminator"- or "Matrix"-style rise of the machines. However, what this study does highlight is the need for AI developers and end-users to be aware of the consequences of their actions in the technology space.
And at the moment, one of the greatest ethical challenges facing many developers and end-users in the tech space is unfair bias and discrimination due to AI-based decision making.
No AI system will ever be completely free of bias, nor should it be.
Some degree of bias is a necessary part of any machine learning model; without it, the model simply wouldn't work. For example, if you were building a model to identify the best applicants for a job vacancy, you would need that model to be biased in favour of those who are best suited to the job. Otherwise, it would just be identifying people at random.
The problem arises when the biases reflected in a model are not socially acceptable ones, such as a bias in favour of merit, but rather ones that give rise to discrimination, such as racism or sexism.
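If you'd like to see what checking for this kind of unwanted bias can look like in practice, here's a minimal sketch in Python. To be clear, this is just an illustration: the data, group labels and 80% threshold are all made-up assumptions. It simply compares a hypothetical hiring model's shortlisting rates across two groups, in the spirit of the "four-fifths rule" sometimes used in recruitment.

```python
# Illustrative sketch only: the data, column names and the 80% threshold are
# made-up assumptions, not a recommended or complete fairness test.
import pandas as pd

# Hypothetical model decisions: 1 = shortlisted, 0 = rejected
results = pd.DataFrame({
    "group":       ["A", "A", "A", "A", "B", "B", "B", "B"],
    "shortlisted": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Selection rate for each group
rates = results.groupby("group")["shortlisted"].mean()

# Flag any group whose selection rate falls below 80% of the highest group's rate
ratio = rates / rates.max()
flagged = ratio[ratio < 0.8]

print(rates)
print("Groups below the 80% threshold:", list(flagged.index))
```

A check like this won't catch every form of discrimination, but it turns the question of which biases are acceptable into something you can actually measure and discuss.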
The good news is that many people in the data space are already well aware of the ethical issues arising from the use of AI. One of those people is Dr Fei Huang, whom I spoke to about fairness and discrimination in machine learning in a recent episode of my podcast, "Value Driven Data Science".
Good behaviours can't just be learned overnight. They need to be ingrained over time. If AI developers can start adopting ethical behaviours now, when the consequences are less dire, these behaviours are likely to carry forward into the future.
And that might be enough to save humanity from the robopocalypse.
Talk again soon,
Dr Genevieve Hayes.