Hi,
We all know what it means for a human to discriminate against another human, but how can a model or AI discriminate against someone?
In the latest episode of Value Driven Data Science, I'm joined by Dr Fei Huang, a senior lecturer at the University of New South Wales's School of Risk and Actuarial Studies, to discuss the importance of considering fairness and avoiding discrimination when developing machine learning models for your business.
Some of the things we discuss in this episode include:
- Direct vs indirect discrimination and how data scientists can create discriminatory machine learning models without ever intending to.
- What it means for a model to be fair and the trade-off that exists between individual and group fairness.
- How fairness and discrimination come up in different applications of machine learning, including insurance.
- How different jurisdictions are currently addressing algorithmic discrimination, through regulation and other means.
- What this means for organisations that currently make use of machine learning models or would like to in the future.
- Why organisations should start considering fairness and discrimination when using analytics and what they can do about it now.
You can listen to this episode by clicking the button below, or find it on Apple Podcasts, Amazon Music, Spotify or Google Podcasts.