claim
Hallucinations that are semantically close to the truth are the hardest for LLMs to detect.

Authors

Sources

Referenced by nodes (2)