claim
Current large language models struggle most to detect hallucinated content that is semantically close to the truth.

Authors

Sources

Referenced by nodes (1)