Techniques for detecting hallucinations in large language models include source comparison, where model-generated answers are checked against known facts or trusted retrieval sources; response attribution, where the model is asked to cite the sources supporting its claims; and multi-pass validation, where several answers are sampled for the same prompt and compared, since high variance across answers suggests the model is guessing rather than recalling grounded knowledge. A sketch of the multi-pass approach follows.
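
A minimal sketch of multi-pass validation, assuming a hypothetical `generate` call standing in for any sampled LLM completion API; the lexical-similarity measure and the 0.6 agreement threshold are illustrative assumptions, not values taken from the text:

```python
# Minimal sketch of multi-pass validation (self-consistency checking).
# `generate` is a hypothetical stand-in for a non-deterministic LLM call;
# the similarity measure and threshold below are illustrative choices.
from difflib import SequenceMatcher
from itertools import combinations


def generate(prompt: str) -> str:
    """Placeholder for a sampled (non-deterministic) LLM completion call."""
    raise NotImplementedError("wire this to your model API")


def multi_pass_validate(prompt: str, n_samples: int = 5,
                        agreement_threshold: float = 0.6) -> bool:
    """Sample several answers and flag likely hallucination on high variance.

    Returns True when the sampled answers agree closely enough to be
    trusted, False when pairwise similarity is low (the model may be
    guessing rather than recalling grounded knowledge).
    """
    answers = [generate(prompt) for _ in range(n_samples)]

    # Mean pairwise lexical similarity across all sampled answers; an
    # embedding-based semantic similarity could be substituted here to
    # tolerate paraphrasing between consistent answers.
    pairs = list(combinations(answers, 2))
    mean_similarity = sum(
        SequenceMatcher(None, a, b).ratio() for a, b in pairs
    ) / len(pairs)

    return mean_similarity >= agreement_threshold
```

Source comparison and response attribution would slot into the same shape: replace the pairwise-agreement check with a lookup against a trusted retrieval source, or with a prompt that requires the model to cite verifiable sources.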
