claim
Small language models tend to produce hallucinations that are obviously wrong or awkwardly phrased, which makes those errors easier to detect.

Referenced by nodes (2)