Hallucination

Simple Definition

When an AI model generates confident-sounding but factually incorrect or fabricated information.

Full Explanation

Hallucination is one of the biggest challenges with LLMs. Models are trained to produce plausible-sounding text, not necessarily true text, so they can invent citations, facts, statistics, and even entire papers that don't exist, all stated with complete confidence. Mitigation strategies include RAG (retrieval-augmented generation, which grounds answers in retrieved documents, sketched below), web search integration, and using models whose knowledge cutoffs are close to the present.
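As a rough illustration, the Python sketch below shows the basic shape of RAG-style grounding under simple assumptions: retrieve a few relevant passages with a naive keyword-overlap score, then build a prompt that constrains the model to answer only from those passages. The document store, the scoring, and the final LLM call are all hypothetical placeholders, not any particular library's API.

```python
# Minimal sketch of RAG-style grounding (all names and data are hypothetical).
# Retrieve supporting passages, then restrict the model to answer from them.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Build a prompt that tells the model to use only the retrieved context."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    docs = [
        "Hallucination refers to fluent but factually unsupported model output.",
        "Retrieval-augmented generation grounds answers in retrieved documents.",
    ]
    question = "What does retrieval-augmented generation do?"
    prompt = build_grounded_prompt(question, retrieve(question, docs))
    print(prompt)  # pass this prompt to the LLM client of your choice
```

Real systems replace the keyword scorer with embedding-based vector search, but the grounding principle is the same: the model is asked to answer from retrieved evidence rather than from memory alone.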

Example

An LLM might confidently cite a scientific paper that doesn't exist, with a real-sounding author and journal name.

Last verified: 2026-03-30