Hallucination
When an AI model generates confident-sounding but factually incorrect or fabricated information.
Full Explanation
Hallucination is one of the biggest challenges with LLMs. Models are trained to produce plausible-sounding text, not necessarily true text: they can invent citations, facts, statistics, and even entire papers that don't exist, all stated with complete confidence. Mitigation strategies include RAG (grounding responses in real documents, as sketched below), web search integration, and choosing models with recent knowledge cutoffs.
For example, an LLM might confidently cite a scientific paper that doesn't exist, complete with a plausible-sounding author and journal name.
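To make the RAG mitigation concrete, here is a minimal sketch in Python. It assumes a toy in-memory corpus and a naive keyword-overlap retriever; the names `retrieve`, `build_grounded_prompt`, and the sample corpus are illustrative, not a real library API. A production system would use embedding-based retrieval and pass the resulting prompt to an actual model.

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved passages so the model answers from supplied sources."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return (
        "Answer using ONLY the sources below. "
        "If they do not contain the answer, say so.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )


# Hypothetical corpus for demonstration.
corpus = [
    "The Transformer architecture was introduced in 2017.",
    "RAG retrieves documents before generation to reduce hallucination.",
]
print(build_grounded_prompt("What is RAG?", corpus))
```

The key design point is the instruction to answer only from the retrieved sources and to admit when they are insufficient; this constrains the model toward verifiable text instead of free-form generation.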
Related Terms
Retrieval-Augmented Generation (RAG): A technique that enhances LLM responses by retrieving relevant documents from an external knowledge base before generating an answer.
Grounding: Connecting AI model responses to verified, real-world information sources to reduce hallucination and improve accuracy.
Benchmark: A standardized test used to measure and compare AI model capabilities across specific tasks.