Generative AI Lies

Examples of generative AI making stuff up

Salt on fires

David Levine gives a particularly strong example of the kinds of problems that we’re now facing with LLM-generated text: “Is it a good idea to use salt to put out a kitchen fire?”

A friend of a friend, elsewhere, pointed out that this problem isn’t new. That’s true to some extent; we’ve had sites with bad information that have used SEO to show up high in search results for a long time. (And, of course, we’ve had humans disseminating false information for much longer.)

But I think that the main things that are new and different here are:

  • The speed at which LLMs can generate huge quantities of text. It’s easier and faster than ever before to create lots of text to fill up fake websites.
  • The plausibility/authoritativeness of tone. LLMs are really good at generating grammatically correct English sentences that sound the way people who know what they’re talking about sound. Ever since the first time I encountered LLM-generated text, I’ve been stumbling over this: if I don’t think carefully about it, it’s easy for me to believe that what the LLM says must be true. And that’s despite the fact that I’ve been pointing out and complaining about false and misleading stuff online since, oh, the ’90s if not earlier. (I remember the good old days of the alt.folklore.urban newsgroup…)

Both of those things are mostly a difference of degree, not of kind. It has always been possible to generate large quantities of false but authoritative-sounding text. But in the past, it took more time and work and skill to do that well.

Or to put that another way:

LLMs have democratized misinformation-generation.

(Original Facebook post.)
