Hi,
As a data scientist, I count The Matrix among my favourite movies of all time.
For the action sequences alone, few other films can compete.
But it also teaches a valuable lesson about AI adoption that many data scientists miss.
The Matrix takes place in a world dominated by machines that we humans created.
Yet, on the face of things, the "Matrix" universe isn't all that bad.
While using us as their energy source, the machines provide humanity with a simulated life that is far better than what we would otherwise experience in the real world.
All it costs is control over our lives.
It is fundamental to human nature to resist being controlled.
So, our heroes choose to destroy the machines rather than live on with the machines as their overlords.
The Matrix resonated with audiences because the desire for human autonomy isn't just philosophical, but plays out in our own lives every day - including in data science.
When analytics leader Bill Schmarzo appeared on my podcast, Value Driven Data Science, he shared this telling example. While he was implementing models to help advertisers optimise their spending across the Yahoo ad network, adoption was initially poor.
Users were presented with an interface where they could either accept or reject the model's recommendations on advertising spend - and that was their only choice. Rather than hand control to the machine, they decided to walk away.
When the interface was changed to allow users to override and change the results, however, the transformation was dramatic. Model adoption went through the roof, and advertisers began actively engaging with the system - rather than seeing it as something to avoid.
People are more likely to trust and use systems they can influence and control - even if they rarely choose to exercise that power.
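If you build decision-support tools yourself, the same principle can be baked into the interface. Here's a minimal Python sketch (hypothetical names, not the actual Yahoo system Bill described) of the difference between a take-it-or-leave-it recommendation and one the end user can override:

from dataclasses import dataclass
from typing import Optional

@dataclass
class SpendRecommendation:
    # The model's suggested ad spend, plus room for the human to differ.
    suggested_spend: float                   # what the model recommends
    user_override: Optional[float] = None    # what the advertiser chooses instead

    @property
    def final_spend(self) -> float:
        # The model's number is only the default; the user keeps the last word.
        if self.user_override is not None:
            return self.user_override
        return self.suggested_spend

# Take-it-or-leave-it: accept the model's number or walk away.
rec = SpendRecommendation(suggested_spend=1200.0)
print(rec.final_spend)   # 1200.0 - the machine decides

# With an override, the advertiser stays in control.
rec.user_override = 950.0
print(rec.final_spend)   # 950.0 - the human decides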
Do you present insights as take-it-or-leave-it recommendations - forcing stakeholders to surrender control? Or do you empower your end users to make better decisions while preserving their autonomy?
Just like the fate of humanity in The Matrix, the success of your models may ultimately depend on who maintains control.
Talk again soon,
Dr Genevieve Hayes.
p.s. You can hear Bill's full insights on AI adoption HERE.