claim
Large language models tend to produce hallucinations that are fluent, internally consistent, and superficially plausible, which makes such outputs dangerous for users who cannot independently verify them.
