Hallucination
Definition
When an AI model generates plausible-sounding but factually incorrect or fabricated information.
In-Depth Explanation
Hallucinations occur because language models generate statistically likely text rather than retrieving verified facts. A model can confidently state false information, invent citations, or describe events that never happened. Mitigation strategies include retrieval-augmented generation (RAG), automated fact-checking, and improved training techniques.
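RAG mitigates hallucination by grounding the model's answer in retrieved passages rather than relying on parametric memory alone. Below is a minimal sketch of the idea; the toy keyword-overlap retriever, the in-memory document list, and the grounded-prompt format are illustrative assumptions, not any particular library's API.

```python
# Minimal RAG-style sketch: retrieve supporting passages, then build a
# prompt that instructs the model to answer only from that context.
# The retriever and prompt format here are hypothetical simplifications.

# A tiny in-memory "knowledge base" of verified passages (assumed data).
DOCUMENTS = [
    "RAG grounds model answers in retrieved source passages.",
    "Hallucinations are plausible-sounding but fabricated outputs.",
    "Fact-checking compares model claims against trusted references.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        DOCUMENTS,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str) -> str:
    """Assemble a prompt that constrains the model to the retrieved
    context and gives it an explicit way to decline."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say 'I don't know.'\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    print(build_grounded_prompt("What is a hallucination in AI?"))
```

In production systems the keyword matcher would be replaced by embedding-based vector search, but the mitigation principle is the same: the model is asked to cite retrieved evidence instead of inventing facts.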
Real-World Example
An LLM might confidently cite a research paper that does not exist when asked for sources.