Oh, look, another one.
In this article, “Proust, ChatGPT and the case of the forgotten quote,” an “author, academic and journalist” is looking for a quote from Proust, so she asks ChatGPT. She then spends a lot of time assuming that the information ChatGPT provides is accurate (even after the second or third time it blatantly contradicts itself) and fretting about how to ask ChatGPT follow-up questions without seeming rude.
Reminder:
CHATGPT MAKES THINGS UP.
IT IS NOT A RELIABLE SOURCE OF FACTUAL INFORMATION.
Authors and editors who publish articles that treat ChatGPT as a reliable source that just happened to get one or two things wrong are doing a disservice to their readers. (And the companies that have created Large Language Model AIs are doing a disservice to the world by not making it sufficiently clear that LLMs are not reliable sources of factual information.*)
I’ve now written to the author of this piece and to the Guardian to ask them not to write this kind of article in the future, but I have little hope of making an impression.
*The companies do provide disclaimers saying, more or less, that what the LLMs say might not be accurate. But judging by the articles I see, those disclaimers aren’t clear enough or aren’t visible enough.