Hi,
Here's a cool experiment to try with ChatGPT.
Ask ChatGPT to solve a maths problem, such as a quadratic equation.
First, ask it to solve the problem by stating the answer and then giving its reasoning. Then try again, this time asking it to solve the problem by stating its reasoning first and then giving the answer.
For example:
- Version 1: Solve for x using the quadratic formula: (x + 3)(x - 2) = 1. State the answer first and then give your reasoning.
- Version 2: Solve for x using the quadratic formula: (x + 3)(x - 2) = 1. Give your reasoning first and then state the answer.
Did you get the same answer?
When I tried this, the answer was "no".
ChatGPT found the correct answer using the Version 2 prompt. With the Version 1 prompt, however, it initially gave the wrong answer, then reasoned its way to the right one; and upon realising the discrepancy, it tried to convince me there was something wrong with the problem itself, rather than with its initial guess.
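(For reference: expanding (x + 3)(x - 2) = 1 gives x² + x - 7 = 0, so the quadratic formula yields x = (-1 ± √29)/2, or roughly 2.19 and -3.19. If you'd like to double-check it yourself, here's a quick Python sketch, purely for illustration:)

```python
import math

# (x + 3)(x - 2) = 1 expands to x^2 + x - 7 = 0
a, b, c = 1, 1, -7

# Quadratic formula: x = (-b ± sqrt(b^2 - 4ac)) / (2a)
disc = b**2 - 4*a*c
x1 = (-b + math.sqrt(disc)) / (2*a)
x2 = (-b - math.sqrt(disc)) / (2*a)
print(x1, x2)  # ≈ 2.19 and -3.19, i.e. x = (-1 ± √29) / 2
```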
And it's now been shown that ChatGPT is far more likely to get the right answer with the Version 2 approach (reasoning first) than with the Version 1 approach (answer first).
Once it has settled on an answer, ChatGPT uses its reasoning to justify that answer, even when it's wrong, rather than accepting that it may have made a mistake.
Here's the thing...
How often do we behave this way in our own lives?
No one likes admitting they're wrong. But if you're looking for ways to set yourself apart from AI, this one is clear.
Unlike ChatGPT, we can change the way we think and learn from our mistakes. We can use our reasoning to seek the truth, rather than just doubling down on our errors.
The only thing standing in the way is our own human pride.
Talk again soon,
Dr Genevieve Hayes.