claim
Attention matrix analysis evaluates hallucination in Large Language Models by inspecting whether the attention patterns the model uses to weight its input are coherent: generated tokens that place little attention mass on the relevant source tokens may be ungrounded.
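A minimal sketch of the idea, using a synthetic attention matrix rather than a real model's outputs: each generated token's attention row is split into the mass it places on the source input versus elsewhere, and tokens whose source-attention fraction falls below a threshold are flagged as potentially hallucinated. The function names, the threshold value, and the toy matrix are all illustrative assumptions, not part of any specific published method.

```python
import numpy as np

def source_attention_scores(attn, n_source):
    """attn: (n_generated, n_context) attention weights, one row per
    generated token. n_source: number of leading context positions that
    belong to the source input. Returns each token's fraction of
    attention mass placed on the source."""
    attn = np.asarray(attn, dtype=float)
    attn = attn / attn.sum(axis=1, keepdims=True)  # normalize rows
    return attn[:, :n_source].sum(axis=1)

def flag_hallucinations(attn, n_source, threshold=0.3):
    """Flag generated tokens whose source-attention fraction is low.
    The 0.3 threshold is an arbitrary illustrative choice."""
    return source_attention_scores(attn, n_source) < threshold

# Toy example: 3 generated tokens over a 5-token context,
# of which the first 4 positions are the source input.
attn = [
    [0.40, 0.30, 0.20, 0.05, 0.05],  # mostly attends to source
    [0.05, 0.05, 0.05, 0.05, 0.80],  # ignores source -> flagged
    [0.25, 0.25, 0.25, 0.20, 0.05],  # mostly attends to source
]
print(flag_hallucinations(attn, n_source=4))
```

In practice the attention matrix would come from a real model (e.g., averaged over heads and layers), and the scoring rule would be tuned or learned rather than a fixed threshold.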

Authors

Sources

Referenced by nodes (2)