Reference
The paper 'A Survey of Hallucination in "Large" Foundation Models' categorizes research on hallucination in text generated by Large Language Models (LLMs), multilingual LLMs, and domain-specific LLMs, and also surveys work on detection, mitigation, tasks, datasets, and evaluation metrics.
Authors
Sources
- EdinburghNLP/awesome-hallucination-detection - GitHub (github.com)
Referenced by nodes (1)
- Large Language Models concept