Hi,
If a machine can only do what it's programmed to do, can it ever make a mistake?
After all, if it's correctly following its program, then the machine is "right", even when
it seems "wrong".
This argument was recently put to me by friend-of-the-list Rod Aparicio, and it seemed apt in light of my last post about
ChatGPT's inability to own its mistakes.
By this reasoning, the new ChatGPT disclaimer, that "ChatGPT can make mistakes", is incorrect, because any mistakes made by ChatGPT are really those of its human programmers - and of the human end-users of its outputs.
Here's the thing...
Whenever something goes wrong, human instinct is to search for someone to blame. And when it comes to finding a scapegoat, nothing beats a machine.
Yet, in blaming a machine (or generative AI), we're admitting
we've surrendered our decision-making to that machine.
We are effectively becoming "machines" ourselves by blindly following the machine's decisions.
No one likes to make mistakes. Yet, it is our ability to get things "wrong" that sets us apart from the machines. It
allows us to think beyond our "programming", come up with new ideas, and deal with the unexpected.
Those are the skills needed to succeed in the post-ChatGPT world.
To err is human, and for that, we should be proud.
Talk again soon,
Dr Genevieve Hayes.