claim
Hallucination in large language models has been linked to pretraining biases and architectural limits, according to research by Kadavath et al. (2022), Bang and Madotto (2023), and Chen et al. (2023).
