Generative AI Lies

Examples of generative AI making stuff up

Category: Generated articles

  • Freelance articles

    “A suspicious pitch [for an article] from a freelancer led editor Nicholas Hune-Brown to dig into their past work. By the end, four publications, including The Guardian and Dwell, had removed articles from their sites.”

    Hune-Brown writes:

    “I was embarrassed. I had been naively operating with a pre-ChatGPT mindset, still assuming a pitch’s ideas and prose were actually connected to the person who sent it.”

    “this generation’s internet scammers are […] taking advantage of an ecosystem uniquely susceptible to fraud—where publications with prestigious names publish rickety journalism under their brands, where fact-checkers have been axed and editors are overworked, where technology has made falsifying pitches and entire articles trivially easy[…]”

    (Original Facebook post.)


  • Wikipedia

    “During a recent [Wikipedia] community call, it became apparent that there is a community split over whether or not to use large language models to generate content. While some people expressed that tools like OpenAI’s ChatGPT could help with generating and summarizing articles, others remained wary.”

    —“AI Is Tearing Wikipedia Apart”

    “The community is also divided on whether large language models should be allowed to train on Wikipedia content. While open access is a cornerstone of Wikipedia’s design principles, some worry the unrestricted scraping of internet data allows AI companies like OpenAI to exploit the open web to create closed commercial datasets for their models. This is especially a problem if the Wikipedia content itself is AI-generated, creating a feedback loop of potentially biased information, if left unchecked.”

    The article also stresses the importance of checking every citation that ChatGPT provides, since they are often fictional.

    (Original Facebook post.)