reference
In their 2023 paper published in the Findings of the Association for Computational Linguistics: EMNLP 2023, Nick McKenna, Tianyi Li, Liang Cheng, Mohammad Hosseini, Mark Johnson, and Mark Steedman investigated the sources of hallucination in large language models on inference tasks.
Authors
Sources
- A Survey of Incorporating Psychological Theories in LLMs - arXiv
Referenced by nodes (2)
- Large Language Models concept
- Association for Computational Linguistics entity