Measurement
An empirical study of legal question answering found that GPT-3.5 hallucinates in 69% of outputs and LLaMA-2 in 88%, when tested against a custom set of factual queries about US case law.
Sources
- EdinburghNLP/awesome-hallucination-detection (GitHub: github.com)
Referenced by nodes (1)
- U.S. location