Relations (1)
related 0.20 — supporting 2 facts
Large Language Models and Prompt Sensitivity are related in two ways: the hallucination attribution framework provides Prompt Sensitivity (PS) as an interpretable quantitative score for benchmarking and tracking improvements in Large Language Models [1], and developers use attribution patterns involving Prompt Sensitivity to inform fine-tuning strategies when deploying Large Language Models [2].
Facts (2)
Sources
Survey and analysis of hallucinations in large language models (frontiersin.org), 2 facts
claim: The hallucination attribution framework provides interpretable quantitative scores, specifically Prompt Sensitivity (PS), Model Variability (MV), and Joint Attribution Score (JAS), which are used for benchmarking and tracking improvements in Large Language Models.
perspective: For developers deploying Large Language Models, selecting models based on attribution patterns (Prompt Sensitivity vs. Model Variability) can inform fine-tuning strategies.
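
The facts above name PS, MV, and JAS as quantitative scores but do not give their formulas. The sketch below is purely illustrative: it assumes PS measures the spread of a model's hallucination rate across paraphrased prompts, MV the spread of mean rates across models, and JAS a simple sum of the two. The function names, the standard-deviation choice, and the toy numbers are all assumptions, not the framework's actual definitions.

```python
import statistics

def prompt_sensitivity(rates_across_paraphrases):
    """Assumed PS: spread of one model's hallucination rates over
    paraphrases of the same prompt (higher = more prompt-sensitive)."""
    return statistics.pstdev(rates_across_paraphrases)

def model_variability(mean_rates_per_model):
    """Assumed MV: spread of mean hallucination rates across candidate
    models evaluated on the same prompt set."""
    return statistics.pstdev(mean_rates_per_model)

def joint_attribution_score(ps, mv):
    """Assumed JAS: a naive combination of PS and MV; the real framework
    may weight or normalize these components differently."""
    return ps + mv

# Toy data: one model's hallucination rates over 4 paraphrases,
# and mean rates for 3 candidate models.
ps = prompt_sensitivity([0.10, 0.30, 0.15, 0.25])
mv = model_variability([0.20, 0.18, 0.35])
jas = joint_attribution_score(ps, mv)
```

Under this sketch, a deployment team would prefer fine-tuning on prompt robustness when PS dominates JAS, and model selection or ensembling when MV dominates, which is the kind of attribution-driven decision the perspective fact describes.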