Hallucination
When an AI model generates plausible-sounding but factually incorrect or fabricated information, presenting it with the same confidence as accurate responses.
Hallucination remains one of the most significant challenges for deploying AI in high-stakes applications such as healthcare, legal work, and journalism. It is particularly dangerous because readers struggle to catch it: human evaluators fail to identify AI-generated text 73% of the time, so fabricated content can pass review unnoticed. The problem has become a key driver of AI safety research and has prompted calls for transparency requirements in AI regulation, including the EU AI Act's disclosure provisions.
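As a rough illustration of how hallucinations can be flagged in practice, one common heuristic (not described in this entry, and sketched here under assumptions) is self-consistency sampling: ask the model the same question several times at a non-zero temperature and treat disagreement between answers as a warning sign. The `ask` callables below are hypothetical stand-ins for a sampled language model.

```python
import itertools
from collections import Counter

def self_consistency_check(ask, question, samples=5, threshold=0.6):
    """Ask the same question several times; if the answers disagree,
    treat the output as a possible hallucination.
    `ask` is any callable that takes a question and returns a string."""
    answers = [ask(question) for _ in range(samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    agreement = count / samples
    return top_answer, agreement, agreement >= threshold

# Hypothetical stub "models" standing in for a sampled LLM.
stable = lambda q: "Paris"                  # always consistent -> likely grounded
_cycle = itertools.cycle(["1912", "1915", "1908"])
unstable = lambda q: next(_cycle)           # inconsistent -> flag as suspect

print(self_consistency_check(stable, "Capital of France?"))
# ('Paris', 1.0, True)
print(self_consistency_check(unstable, "Year the society was founded?"))
```

This is only a heuristic: a model can also hallucinate consistently, so agreement lowers but does not eliminate the risk.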
Related Terms
Artificial General Intelligence (AGI)
A hypothetical form of AI that can understand, learn, and apply knowledge across any intellectual task at or above human level, rather than being specialized for specific tasks.
AI Alignment
The research field focused on ensuring AI systems behave in accordance with human values and intentions, particularly as systems become more capable.
AI Safety
The interdisciplinary field focused on preventing AI systems from causing harm, encompassing alignment, robustness, interpretability, and governance of AI technologies.
ChatGPT
OpenAI's conversational AI assistant, launched in November 2022, which catalyzed the current generative AI boom by demonstrating the capabilities of large language models to a mainstream audience.
Deepfake
AI-generated synthetic media — images, video, or audio — that realistically depict events or statements that never occurred, created using deep learning techniques.
Fine-Tuning
The process of further training a pre-trained AI model on a specific, smaller dataset to specialize it for a particular task or domain, requiring far less compute than training from scratch.
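To make the idea concrete, here is a minimal, hypothetical sketch in plain Python (no real training framework): a one-parameter model is first "pre-trained" on a larger generic dataset, then fine-tuned for far fewer steps on a small domain-specific one, continuing gradient descent from the existing weight rather than starting from zero.

```python
import random

def train(w, data, lr=0.01, steps=300):
    """Minimise squared error of a toy one-parameter model y = w * x
    using stochastic gradient descent."""
    for _ in range(steps):
        x, y = random.choice(data)
        grad = 2 * (w * x - y) * x   # derivative of (w*x - y)^2 w.r.t. w
        w -= lr * grad
    return w

random.seed(0)

# "Pre-training": learn y = 2x from a larger generic dataset.
pretrain_data = [(x, 2.0 * x) for x in range(1, 6)]
w_pretrained = train(0.0, pretrain_data)           # ends close to 2.0

# "Fine-tuning": continue from the pre-trained weight on a small
# domain-specific dataset where y = 2.5x, using far fewer steps.
finetune_data = [(x, 2.5 * x) for x in (1, 2, 3)]
w_finetuned = train(w_pretrained, finetune_data, steps=100)  # moves toward 2.5
```

The same principle scales up: real fine-tuning updates billions of pre-trained weights on a specialized corpus, which is why it needs far less compute than training from scratch.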