Generative AI Lies

Examples of generative AI making stuff up

Category: Generated legal documents

  • Law firm dissolved

    In Mississippi, a lawyer included AI-generated fake citations in her court filings, was told not to do that, but kept doing it.

    The latest judge to receive her fake citations was not amused, and has issued sanctions against her and against the two partners of the firm she worked for.

    If I’m understanding right, the partners have now dissolved the firm.

    (It looks like Ms. Watson, the lawyer who used AI-generated fake citations in ten different cases, may be the daughter of one of the two partners.)

    The judge reacted strongly to Ms. Watson’s behavior:

    In light of repeated warnings from federal courts about the risk of hallucinated cases, as well as CLE trainings she attended, direct notice and knowledge of the same prior mistakes, her violation of the Firm’s AI policy, and the sheer number of filings, Ms. Watson’s misconduct is particularly egregious and prolific.

    The partners are also being sanctioned for failing to notice the problems. For example:

    a large portion of Billups’ argument relies on a case styled Jackson v. Gautreaux, 3 F. 4th 182, 190 (5th Cir. 2021). […] In fact, this case is cited eight times, even arguing that a jury should be instructed under its holding. […] In reality, Jackson is an excessive force and failure to train case and is wholly irrelevant to the case at bar. A seasoned attorney examining the brief should have read a case so heavily relied upon. Had he done so, he would have easily discovered the problems.

    The judge noted that the usual penalty for this sort of thing has been fines, but quoted another case about why fines are insufficient:

    “If fines and public embarrassment were effective deterrents, there would not be so many [AI misuse] cases to cite.”

    (Given that there are so many such cases, I probably won’t post about all the ones I hear about, but this one did seem especially egregious.)


  • Attorney fine

    “A California attorney must pay a $10,000 fine for filing a state court appeal full of fake quotations generated by the artificial intelligence tool ChatGPT.”

    “The fine appears to be the largest issued over AI fabrications by a California court and came with a blistering opinion stating that 21 of 23 quotes from cases cited in the attorney’s opening brief were made up.”


    Side note:

    I saw a Guardian opinion piece yesterday that, after pointing out some issues with a generative-AI product, quoted an authoritative-sounding source as saying that you have to be careful about using generative AI, but that it’s fine to use it for some tasks, such as factchecking.

    I dropped a note to the Guardian’s readers’ editor and to the person who recommended using LLMs for factchecking, pointing out to them that you absolutely should never use LLMs to check facts, but I don’t expect that note to have much effect.


    (Original Facebook post.)


  • AI Hallucination Cases database

    That thing where lawyers (and others) use generative AI in court filings, and the AI makes stuff up? Now there’s a list of such situations: the AI Hallucination Cases database.

    “This database tracks legal decisions in cases where generative AI produced hallucinated content – typically fake citations, but also other types of arguments.”

    “While seeking to be exhaustive (201 cases identified so far), it is a work in progress and will expand as new examples emerge.”

    (Original Facebook post.)


  • Expert testimony

    “A federal court judge has thrown out expert testimony from a Stanford University artificial intelligence and misinformation professor[, Jeff Hancock], saying his submission of fake information made up by an AI chatbot ‘shatters’ his credibility.”

    “At Stanford, students can be suspended and ordered to do community service for using an AI chatbot to ‘substantially complete an assignment or exam’ without instructor permission. The school has repeatedly declined to respond to questions […] about whether Hancock would face disciplinary measures.”

    (Original Facebook post.)


  • Cohen legal filing

    “Michael Cohen [(Trump’s former lawyer)] used fake cases created by AI in bid to end [Cohen’s] probation”

    “In the filing, Cohen wrote that he had not kept up with ‘emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like ChatGPT, could show citations and descriptions that looked real but actually were not.’ To him, he said, Google Bard seemed to be a ‘supercharged search engine.’”

    (Original Facebook post.)