Claim
Existing approaches for auditing large language models (LLMs) often focus on isolated aspects of model behavior, such as detecting specific biases or evaluating fairness, rather than understanding how outputs depend on each input token.
Authors
Sources
- Track: Poster Session 3, AISTATS 2026 (virtual.aistats.org) via serper
Referenced by nodes (1)
- Large Language Models concept