Hallucination

What is AI Hallucination?

AI hallucination is when a generative model produces output that sounds plausible but is factually wrong or entirely fabricated, such as invented citations, events, or statistics, delivered with the same confidence as a correct answer.

Why?

Large language models generate text by predicting the most likely next token from patterns in their training data; they have no built-in mechanism for checking claims against ground truth. Gaps or errors in the training data, ambiguous prompts, and the pressure to always produce an answer can all lead to confident-sounding fabrications.

Some Real-Life Cases:

Case 1: Lawyer Citing Fake Cases Generated by ChatGPT

In 2023, New York lawyer Steven Schwartz used ChatGPT to draft a legal brief for a personal injury case. The brief cited several court cases that the AI had fabricated and that opposing counsel could not find. When challenged, Schwartz and his colleague Peter LoDuca stood by the citations until the court ordered them to produce the cases; when they could not, they were fined $5,000. The court also mandated that any future filings involving generative AI content explicitly disclose such use so the material can be checked for accuracy.

Case 2: AI-Generated Misinformation

AI-generated false information can spread rapidly and mislead the public. One example involved Google's Bard chatbot, which incorrectly claimed that the James Webb Space Telescope had taken the first image of an exoplanet. The error appeared in Google's own promotional demo and circulated widely after its release, showing how easily AI output can propagate erroneous information.

https://www.theverge.com/2023/2/8/23590864/google-ai-chatbot-bard-mistake-error-exoplanet-demo

Mitigation Techniques

The OpenAI Cookbook has a page with guidelines and examples for building guardrails that help prevent hallucinations in AI models; a minimal sketch of one such guardrail follows below.
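
One simple guardrail pattern is to force the model to answer only from supplied context, then run a second "checker" call that rejects any answer the context does not support. The sketch below is a minimal illustration of that pattern, assuming the openai Python SDK (v1) and an OPENAI_API_KEY in the environment; the model name, prompt wording, and the answer_with_guardrail helper are illustrative choices, not the Cookbook's exact code.

```python
# Minimal grounding-guardrail sketch (illustrative, not the Cookbook's example).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_with_guardrail(question: str, context: str, model: str = "gpt-4o-mini") -> str:
    # Step 1: ask the model to answer ONLY from the supplied context.
    draft = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context. "
                        "If the context does not contain the answer, say you don't know."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    ).choices[0].message.content

    # Step 2: a second call acts as the guardrail, checking whether the draft
    # answer is actually supported by the context.
    verdict = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "You are a fact checker. Reply with exactly 'SUPPORTED' if every "
                        "claim in the answer is backed by the context, otherwise 'UNSUPPORTED'."},
            {"role": "user", "content": f"Context:\n{context}\n\nAnswer:\n{draft}"},
        ],
    ).choices[0].message.content.strip()

    # Step 3: only return the draft if the guardrail passes.
    if verdict.startswith("SUPPORTED"):
        return draft
    return "I can't answer that reliably from the information I have."
```

The extra call adds latency and token cost, but rejecting an unsupported answer before it reaches a user is usually worth it; stricter variants check each claim individually or require citations to specific context passages.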
