Relations (1)

related 1.00 — strongly supporting 1 fact

Hallucination is directly linked to LLM-based agents as a primary research challenge, as evidenced by the survey of their taxonomy and mitigation methods in [1].

Facts (1)

Sources
Awesome-Hallucination-Detection-and-Mitigation (GitHub, github.com) — 1 fact
Reference: The paper "LLM-based Agents Suffer from Hallucinations: A Survey of Taxonomy, Methods, and Directions" by Lin et al. (2025) surveys the taxonomy, methods, and future directions regarding hallucinations in LLM-based agents.