“They thought they were making technological breakthroughs. It was an AI-sparked delusion”
An article about a couple of people whose interactions with LLM chatbots resulted in mental-health issues.
Here’s one example of what not to do when you’re interacting with a chatbot:
“Multiple times, Brooks asked the chatbot for what he calls ‘reality checks.’ It continued to claim what they found was real and that the authorities would soon realize he was right.”
(You can’t get valid reality checks from a chatbot. If a chatbot appears to be trying to convince you of something, please get a reality check from a human.)
…Content warning: the article mentions cases of suicide and murder related to chatbots, though those aren’t its focus.