Hi,
You don't need to understand how an aeroplane works to catch a flight from Melbourne to LA. Provided you trust that the plane won't fall out of the sky somewhere over the Pacific, you'll happily board your flight.
And few people - data scientists included - can explain exactly how LLMs work. But because people trust that the outputs are correct (rightly or not), GenAI chatbots, such as ChatGPT, have taken the world by storm.
Explainable AI is one of the hottest topics in AI these days, and a growing number of domains now mandate model transparency. However, there are plenty of processes where explainable models just won't work.
You can't build a self-driving car or a GenAI chatbot using nothing but easily explainable models.
Here's the thing...
You don't have to.
For most use cases, people don't need to understand how a model produces its outputs. They just need to trust that those outputs are correct.
The challenge comes in building that trust.
And the effort required to build that trust is directly proportional to model complexity.
More to follow next time,
Dr Genevieve Hayes