claim
Hallucinations that are semantically close to the truth are the hardest for LLMs to detect.
Authors
Sources
- MedHallu: Benchmark for Medical LLM Hallucination Detection (emergentmind.com)
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept
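The claim turns on semantic similarity between a hallucinated answer and the ground truth. A minimal sketch of that idea, using a toy bag-of-words cosine similarity in place of a real embedding model (the example texts and the similarity proxy are illustrative assumptions, not taken from MedHallu):

```python
from collections import Counter
import math


def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity: a crude stand-in for semantic similarity."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0


# Hypothetical medical examples: one near-truth hallucination, one far-from-truth.
truth = "metformin is first-line therapy for type 2 diabetes"
near = "metformin is second-line therapy for type 2 diabetes"
far = "insulin pumps cure type 1 diabetes in children"

# The near-truth hallucination scores much closer to the truth than the
# far-from-truth one, which is why it is harder for a detector to flag.
print(cosine_similarity(truth, near) > cosine_similarity(truth, far))
```

In practice a benchmark like MedHallu would use learned sentence embeddings rather than word counts, but the ordering is the same: the smaller the semantic distance to the truth, the weaker the signal available to a detector.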