measurement
In the most extreme cases observed by Giskard, instructions emphasizing conciseness reduced hallucination resistance in large language models by 20%.
Sources
- Phare LLM Benchmark: an analysis of hallucination in ... (www.giskard.ai)
Referenced by nodes (2)
- Large Language Models concept
- hallucination resistance concept