Claim
Hallucinations in Large Language Models (LLMs) can be categorized along two dimensions: prompt-level issues and model-level behaviors.

Referenced by nodes (2)