Spoiler: The winner is the real (human) financial advisor.
I feel like the core premise of this article (and of other similar articles) is deeply flawed. The authors seem to start from the idea that there's a genuinely open question as to whether GPT, in its current form, can do as good a job as a human expert, and then they take that question seriously and run a comparison as if the two were actually comparable.
Whereas, in reality, ChatGPT makes things up, and thus is not a reliable replacement for anything. (Unless you’re looking for something that will lie to you.)
In this article, ChatGPT gives multiple wildly incorrect answers when asked to do fairly simple financial calculations. But the article author doesn’t say “Therefore, don’t trust anything GPT tells you”; instead, they just say “Methinks your math is not so good, ChatGPT.”
Doing accurate math is a core requirement for giving financial advice! It isn't an optional extra whose absence merely shaves a little quality off GPT's responses!
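To make the stakes concrete: the article doesn't reproduce its exact prompts, but "fairly simple financial calculations" usually means something like compound growth, which a few lines of deterministic code settle exactly. A minimal Python sketch (the function and the $10,000 / 7% / 30-year inputs are my own illustrative choices, not the article's):

```python
# Hypothetical example (the article doesn't show its exact prompts):
# a typical "fairly simple financial calculation" is compound growth,
# which deterministic code settles exactly, no language model required.

def future_value(principal: float, annual_rate: float, years: int,
                 compounds_per_year: int = 12) -> float:
    """Future value with periodic compounding: P * (1 + r/n) ** (n * t)."""
    rate_per_period = annual_rate / compounds_per_year
    periods = compounds_per_year * years
    return principal * (1 + rate_per_period) ** periods

if __name__ == "__main__":
    # $10,000 at 7% APR, compounded monthly, for 30 years: about $81,165.
    print(f"${future_value(10_000, 0.07, 30):,.2f}")
```

The formula isn't the hard part. The point is that a tool which sometimes botches arithmetic at this level can't be trusted with any advice built on top of it.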
(I may be overreacting to my perception of the author’s tone. I just hate to see discussions of GPT that take it seriously as a potential source of factual information.)
In summary:
Remember that generative AI gives false answers. Do not rely on it for anything factual.