claim
Alignment tuning and tool use can help mitigate hallucination in Large Language Models.
Authors
Sources
- A survey on augmenting knowledge graphs (KGs) with large ... link.springer.com via serper
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept