claim
Zhang et al. (2023) characterized reliability in LLMs by examining their tendencies with respect to hallucination, truthfulness, factuality, honesty, calibration, robustness, and interpretability.
Authors
Sources
- Building Trustworthy NeuroSymbolic AI Systems (arXiv)
Referenced by nodes (6)
- Large Language Models concept
- hallucination concept
- interpretability concept
- robustness concept
- factuality concept
- honesty concept