Hallucination

What is AI Hallucination?

AI hallucination is when a generative model produces output that is fluent and confident but factually incorrect or entirely fabricated, such as nonexistent citations, events, or statistics.

Why?

Large language models generate text by predicting the statistically most likely next token given their training data; they have no built-in mechanism for verifying facts. When the training data is thin on a topic, or a prompt pushes beyond it, the model can fill the gap with plausible-sounding fabrications.

Some Real-Life Cases:

Case 1: Lawyer Citing Fake Cases Generated by ChatGPT

In 2023, New York lawyer Steven Schwartz used ChatGPT to draft a legal brief for a personal injury case. The brief cited several fictitious court cases fabricated by the AI, which opposing counsel could not locate. When challenged, Schwartz and his colleague Peter LoDuca stood by the citations until the court ordered them to produce the cases. When they could not, they were fined $5,000. The court also mandated that any future filings involving generative-AI content explicitly disclose such use so that accuracy can be checked.

Case 2: AI-Generated Misinformation

AI-generated false information can spread rapidly and mislead the public. A notable example involved Google's Bard chatbot, which claimed in a promotional demo that the James Webb Space Telescope had taken the first image of an exoplanet. The error was widely reported within hours, demonstrating how quickly an AI mistake can propagate as news.

https://www.theverge.com/2023/2/8/23590864/google-ai-chatbot-bard-mistake-error-exoplanet-demo

Mitigation Techniques

Common ways to reduce hallucinations include grounding the model in retrieved source documents (retrieval-augmented generation), instructing it to answer only from supplied context and to say when it does not know, lowering the sampling temperature, verifying citations and facts in a second pass, and keeping a human reviewer in the loop for high-stakes output.

The OpenAI Cookbook has a page with guidelines and examples for developing guardrails that help prevent hallucinations in AI models.

