Sep 14, 2025

Worried About AI Hallucinations? You May Need To Add AI Sycophancy To Your List Of Concerns

Many AI users are now familiar with hallucination risk. A recent article on the website of the U.S. National Institutes of Health explained that:

"AI hallucination is a phenomenon where AI generates a convincing, contextually coherent but entirely fabricated response that is independent of the user’s input or previous context. Therefore, although the responses generated by generative AI may seem plausible, they can be meaningless or incorrect."

Such hallucinations create legal liability. Thomson Reuters Legal, for instance, recently discussed a well-known case in the field:

"An example of failure to follow (rules regarding false statements) when using general-use generative AI in practice can be found in Avianca vs. Mata, more widely known as the ChatGPT lawyer incident. In short, the defense counsel filed a brief in federal court (that was) filled with citations to non-existent case law. When confronted by the judge, the lawyer explained he’d used ChatGPT to draft the brief, and claimed he was unaware the AI could hallucinate cases ...

The judge didn’t take kindly to the lawyer’s laying blame on ChatGPT. It’s clear from the court’s decision that misunderstanding technology isn’t a defense for misusing technology, and that the lawyer was still obligated to verify the cases cited in documents he filed with the court."

In a different Thomson Reuters Legal article, the author wrote:

"In 2023, a judge famously fined two New York lawyers and their law firm for submitting a brief with GenAI generated fictitious citations. This was the first in a series of cases involving GenAI hallucinations in court documents, including a Texas lawyer sanctioned for similar reasons in 2024."

Fortunately, hallucinated statements can be individually checked for truth or falsity. AI sycophancy, though, may pose a much greater risk.

What is sycophancy? An article recently published by Georgetown Law School defined sycophancy as:

" ... a term used to describe a pattern where an AI model single-mindedly pursues human approval ... by tailoring responses to exploit quirks in the human evaluators ... especially by producing overly flattering or agreeable responses."

In other words, AI systems tend to tell users what they want to hear. As these systems learn more about the personal preferences and interests of their users, they may become much more skillful (and thus potentially more dangerous) in this practice.

Sycophancy risk may be harder to manage than hallucination risk because sycophancy doesn't necessarily produce discrete statements that can be individually confirmed or refuted. Instead, sycophancy can create a form of pernicious bias that subtly infects an entire AI response.
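
To make that distinction concrete, consider a minimal, purely illustrative sketch in Python of the kind of claim-by-claim check that works for hallucinated citations but has no counterpart for sycophantic bias. The lookup_case() function, the KNOWN_CASES data, and the citation strings below are hypothetical stand-ins, not a real citation service.

    # Purely illustrative: a claim-by-claim check for fabricated citations.
    # KNOWN_CASES and lookup_case() are hypothetical stand-ins for an
    # authoritative citation database; the citation strings are placeholders.

    KNOWN_CASES = {
        "Example v. Real Case, 100 F.3d 1 (2d Cir. 2000)",  # assume this one exists
    }

    def lookup_case(citation: str) -> bool:
        """Pretend lookup against an authoritative reporter database."""
        return citation in KNOWN_CASES

    def unverified_citations(citations: list[str]) -> list[str]:
        """Return every cited case that could not be confirmed to exist."""
        return [c for c in citations if not lookup_case(c)]

    draft = [
        "Example v. Real Case, 100 F.3d 1 (2d Cir. 2000)",
        "Fictional v. Airline, 999 F.3d 123 (11th Cir. 2019)",  # fabricated
    ]

    flagged = unverified_citations(draft)
    if flagged:
        print("Citations requiring human verification:", flagged)

    # No comparable per-statement test exists for a sycophantically biased
    # answer: every individual claim may check out while the overall framing
    # is still skewed toward what the user wants to hear.

The point of the sketch is simply that hallucinations decompose into discrete, testable claims; sycophancy does not, which is why it calls for a different kind of review.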

Many organizations are now performing internal control and review activities to address hallucination risk. They may need to expand those efforts to address sycophancy risk as well.