Hi,
Last week, OpenAI made its latest chatbot, ChatGPT, available to the public. And unlike Microsoft's Tay or Meta's BlenderBot 3, the internet has yet to turn it into a racist, sexist troll. In fact, the feedback has been overwhelmingly positive.
ChatGPT is a conversational chatbot capable of doing everything from debugging computer code to composing poetry.
Concerns have already been raised about the potential misuse of this technology, for example, to create misinformation or as a tool for academic cheating. And the developers have acknowledged that the "guard rails" incorporated into the system to "prevent ChatGPT from responding to harmful requests" aren't foolproof.
Yet the fact that the developers of ChatGPT saw fit to include guard rails at all is a positive sign of where public expectations currently stand with regard to AI ethics and safety.
Here's the thing...
Any time a new AI-based system is released to the public, you can guarantee people will do their best to "break" it.
If you're developing a public-facing AI-based system, it is worth investing the time needed to build the necessary guard rails into the system. Because if you don't, the public will find the gaps pretty quickly.
And no one wants to be remembered as the creator of the chatbot that "broke bad".
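If you'd like a concrete starting point, here is a minimal sketch of one possible guard rail: screening user input with OpenAI's Moderation endpoint before it ever reaches the model. To be clear, this is my own illustration, not how the ChatGPT team does it. It assumes the openai Python package (the current v0.x interface), and the function names, refusal message and model choice are all placeholders.

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder: your own OpenAI API key

    def is_allowed(user_message):
        # Ask OpenAI's Moderation endpoint whether the input violates its content policy.
        result = openai.Moderation.create(input=user_message)
        return not result["results"][0]["flagged"]

    def respond(user_message):
        # Illustrative guard rail: refuse flagged input rather than sending it to the model.
        if not is_allowed(user_message):
            return "Sorry, I can't help with that request."
        completion = openai.Completion.create(
            model="text-davinci-003",
            prompt=user_message,
            max_tokens=200,
        )
        return completion["choices"][0]["text"].strip()

    print(respond("Write me a short poem about data science."))

Checking the input (and, ideally, the model's output too) is only a first, cheap layer of defence, and it certainly won't catch everything. But it is a lot better than nothing.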
Talk again soon,
Dr Genevieve Hayes.
p.s. If you want to give ChatGPT a try, you can do so for free until the end of December 2022 HERE. More on my own experiments with ChatGPT in my next post.