Generative AI Lies

Examples of generative AI making stuff up

Scaling and reliability

“A common assumption is that scaling up [LLMs] will improve their reliability—for instance, by increasing the amount of data they are trained on, or the number of parameters they use to process information. However, more recent and larger versions of these language models have actually become more unreliable, not less, according to a new study.”

“This decrease in reliability is partly due to changes that made more recent models significantly less likely to say that they don’t know an answer, or to give a reply that doesn’t answer the question. Instead, later models are more likely to confidently generate an incorrect answer.”

(Article from Oct. 3.)

(Original Facebook post.)
