claim
Over-generalization is one cause of hallucination in large language models: because a model compresses vast amounts of knowledge into a fixed set of parameters, nuance and detail can be lost or approximated inaccurately, and the model fills the gaps with plausible but incorrect generalizations.

Authors

Sources

Referenced by nodes (2)