Blog

Insights, tutorials, and updates on high-assurance AI systems, neuro-symbolic programming, and vericoding.

Ghost Entities: Why LLM Hallucinations in Entity Extraction Are a Serious Downstream Risk

Hallucinated entities and relationships look identical to real ones inside a knowledge graph. We measured how often frontier models inject facts from parametric memory rather than from your documents — and found rates as high as 73% on a single document. Here is why that matters and what to do about it.

by gavin
AI Research, Entity Extraction, Hallucinations, Knowledge Graphs, LLMs