Anthropic CEO says AI hallucinates less than humans now, but there's a catch

The CEO's statements challenge long-standing concerns about AI's tendency to hallucinate.

Anthropic CEO Dario Amodei
Business Today Desk
May 26, 2025 | Updated May 26, 2025, 3:25 PM IST

At two recent high-profile events, VivaTech 2025 in Paris and Anthropic’s inaugural Code With Claude developer day, Anthropic CEO Dario Amodei made a bold and thought-provoking claim: artificial intelligence models may now hallucinate less frequently than humans, at least in well-defined factual settings.

The statement, repeated at both events, challenges long-standing concerns about AI's tendency to "hallucinate", a term used to describe when models like Claude, GPT, or Gemini confidently produce inaccurate or fabricated responses. According to Amodei, however, recent internal tests show that advanced models like Claude 3.5 have outperformed humans on structured factual quizzes.

“If you define hallucination as confidently saying something that's wrong, humans do that a lot,” Amodei said at VivaTech. He cited studies in which Claude models consistently delivered more accurate answers than human participants when responding to verifiable questions.

At Code With Claude, which also saw the launch of the new Claude Opus 4 and Claude Sonnet 4 models, Amodei reiterated his belief. According to TechCrunch, he responded to a question by saying, “It really depends on how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways.”

The upgraded Claude 4 models mark a significant milestone in Anthropic’s push toward artificial general intelligence (AGI), with improvements in memory, code generation, tool use, and writing quality. Claude Sonnet 4, in particular, scored 72.7% on the SWE-Bench benchmark, setting a new bar in software engineering performance for AI systems.

Despite the progress, Amodei was quick to clarify that hallucinations have not been eliminated entirely. In open-ended or loosely structured contexts, AI models are still prone to errors. He stressed that context, prompt phrasing, and use case critically influence a model’s reliability, especially in high-stakes scenarios like legal or medical advice.

His comments come in the wake of a courtroom incident where Anthropic’s Claude chatbot produced a false citation in a legal filing during a lawsuit involving music publishers. The company's legal team later had to apologise for the mistake, underscoring the lingering challenges around factual consistency.

Amodei also emphasised the need for clearer metrics across the industry. With no standard definition or benchmark for what constitutes a hallucination, measuring and ultimately reducing these errors remains difficult. “You can’t fix what you don’t measure precisely,” he warned.
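
Amodei's point about measurement invites a concrete illustration. As a minimal sketch only: if hallucination is defined, per his VivaTech remark, as confidently asserting something that is wrong, one way to operationalise it over verifiable questions with known answers might look like the Python below. The Response class, the 0.8 confidence threshold, and the exact-match grading are illustrative assumptions, not Anthropic's methodology.

```python
# Minimal sketch: measuring a "confidently wrong" rate over verifiable
# questions with known gold answers. The Response class, the 0.8
# confidence threshold, and exact-match grading are illustrative
# assumptions, not Anthropic's actual evaluation methodology.
from dataclasses import dataclass

@dataclass
class Response:
    answer: str        # the model's answer text
    confidence: float  # model-reported confidence in [0, 1]

def hallucination_rate(responses: list[Response],
                       gold_answers: list[str],
                       threshold: float = 0.8) -> float:
    """Fraction of responses that are both confident and wrong."""
    if not responses:
        return 0.0
    confidently_wrong = sum(
        1
        for resp, gold in zip(responses, gold_answers)
        if resp.confidence >= threshold
        and resp.answer.strip().lower() != gold.strip().lower()
    )
    return confidently_wrong / len(responses)

# Example: one confident correct answer, one confident wrong one -> 0.5
responses = [Response("Paris", 0.95), Response("1912", 0.90)]
print(hallucination_rate(responses, ["Paris", "1905"]))  # 0.5
```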

While AI models are making strides in factual accuracy, Amodei’s remarks serve as a reminder that both human and machine intelligence have their flaws, and that understanding, measuring, and mitigating those flaws is the next frontier in AI development.
