AI text generators like GPT-3 are really impressive. But there’s one fundamental principle that you should keep in mind whenever you’re looking at anything generated by such a system:
It doesn’t evaluate the truth of what it’s saying.
Sometimes the generated text says things that are true. Sometimes it doesn’t. The generator doesn’t distinguish between those situations.
I know that I’ve said variations on that before, but I think it’s a point worth repeating.
Today’s instance of this statement was inspired by the new ChatGPT chatbot. I just saw a tweet praising ChatGPT’s ability to explain a complicated regular expression; I agree that the explanation provided looks really impressive, but unfortunately, it’s wrong. Even so, lots of people (including the person who posted the transcript of the chat) seemed to think that it was correct.
The regex in question is really weird—it doesn’t at all do what it appears to have been intended to do. ChatGPT, impressively, gives a good explanation of what the regex was intended to do—but that explanation gets several details outright wrong, including saying that one part is optional when it’s really a different part that’s optional.
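To give a sense of the kind of detail involved (this is a made-up illustration, not the regex from the tweet): in a regex, which group the “?” attaches to determines which part is optional, and getting that wrong changes what the pattern matches.

```python
import re

# Hypothetical patterns, purely for illustration.
pattern_a = re.compile(r"foo(bar)?baz")  # here "bar" is the optional part
pattern_b = re.compile(r"foo(barbaz)?")  # here "barbaz" as a whole is optional

print(bool(pattern_a.fullmatch("foobaz")))  # True  -- "bar" can be omitted
print(bool(pattern_b.fullmatch("foobaz")))  # False -- "baz" alone can't follow "foo"
print(bool(pattern_b.fullmatch("foo")))     # True  -- all of "barbaz" can be omitted
```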
Again, there are lots of really impressive things about this answer. But if a human relies on this answer to be factually accurate, they’re going to run into problems.
Another example: ChatGPT explains the factors of a specified polynomial, but gives the wrong answer.
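That one is at least easy to check: multiply the claimed factors back together and see whether you get the original polynomial. Here’s a sketch of that kind of check (with a made-up polynomial, not the one from the example), using Python’s sympy library:

```python
from sympy import symbols, expand, factor

x = symbols("x")
poly = x**2 + 5*x + 6          # a made-up polynomial for illustration

claimed = (x + 1)*(x + 6)      # the kind of wrong factorization a chatbot might offer
print(expand(claimed) == poly) # False -- the claim doesn't check out

correct = factor(poly)         # (x + 2)*(x + 3)
print(expand(correct) == poly) # True
```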
One of the replies to the regex tweet said something along the lines of ~“Who cares if it’s wrong? It’s 99% of the way there. A future version will be able to look impressive and give the right answer!”~
(My tildes there indicate that that’s my paraphrase, not a quote.)
And it may well be true that a future version will fact-check itself.
But for now, don’t believe anything that an AI text generator says, unless it’s been fact-checked by a reliable and knowledgeable human.
(Original Facebook post.)