reference
Ji et al. (2023) proposed a method for mitigating hallucination in large language models via self-reflection, published in Findings of the Association for Computational Linguistics: EMNLP 2023.
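As a rough illustration of the general idea, the sketch below implements a generic generate-critique-revise loop for reducing unsupported claims. It is an assumption-laden sketch, not the specific procedure from Ji et al. (2023); `generate` and `self_reflective_answer` are hypothetical names standing in for an arbitrary LLM completion call and wrapper.

```python
# Minimal sketch of a generic self-reflection loop for hallucination
# mitigation. Illustrative only; not the procedure from Ji et al. (2023).
from typing import Callable

def self_reflective_answer(
    question: str,
    generate: Callable[[str], str],  # hypothetical LLM call: prompt -> text
    max_rounds: int = 2,
) -> str:
    """Draft an answer, ask the model to critique it for unsupported claims,
    and revise until the critique reports no issues or rounds run out."""
    answer = generate(f"Answer the question factually.\nQuestion: {question}")
    for _ in range(max_rounds):
        critique = generate(
            "List any claims in the answer that may be hallucinated or "
            "unsupported. Reply 'NONE' if the answer looks faithful.\n"
            f"Question: {question}\nAnswer: {answer}"
        )
        if critique.strip().upper().startswith("NONE"):
            break  # the model judges its own answer as faithful
        answer = generate(
            "Revise the answer so it keeps only well-supported claims.\n"
            f"Question: {question}\nAnswer: {answer}\nCritique: {critique}"
        )
    return answer

if __name__ == "__main__":
    # Stub generator so the sketch runs without any model or API access.
    canned = iter(["Draft answer.", "NONE"])
    print(self_reflective_answer("What causes LLM hallucination?",
                                 lambda prompt: next(canned)))
```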
Sources
- A Survey of Incorporating Psychological Theories in LLMs (arXiv, arxiv.org, via serper)
- Awesome-Hallucination-Detection-and-Mitigation (GitHub, github.com, via serper)
Referenced by nodes (2)
- LLM-as-a-judge concept
- large language model hallucination concept