claim
The authors of the paper 'Survey and analysis of hallucinations in large language models' propose a probabilistic attribution framework for Large Language Model (LLM) hallucinations, introducing three new metrics (PS, MV, and JAS) to quantify the respective contributions of prompts and model behavior.

Authors

Sources

Referenced by nodes: 1