claim
Large language model hallucinations stem from four major categories of root causes: training data issues, exposure bias during learning, structural knowledge gaps, and generation pressure at inference time.

Referenced by nodes (3)