Procedure
The CREST framework evaluates explainability through two complementary approaches: (1) analyzing the 'Knowledge Concept to Word Attention Map' to verify that the model's attention aligns with domain knowledge, and (2) supplying knowledge concepts and domain-specific decision guidelines to LLMs so they can generate human-understandable explanations.
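The first approach can be sketched as an alignment check: for each knowledge concept, compare the words the model attends to most strongly against an expert-provided keyword set. This is a minimal illustrative sketch, not the CREST implementation; the attention values, concept names, and keyword sets below are hypothetical examples.

```python
import numpy as np

def alignment_scores(attn, words, concept_keywords, top_k=3):
    """For each concept, take its top-k attended words and report the
    fraction that appear in the expert-provided keyword set."""
    scores = {}
    for i, (concept, keywords) in enumerate(concept_keywords.items()):
        top_idx = np.argsort(attn[i])[::-1][:top_k]   # indices of strongest attention
        top_words = {words[j] for j in top_idx}
        scores[concept] = len(top_words & keywords) / top_k
    return scores

# Hypothetical concept-to-word attention map (rows: concepts, cols: words).
words = ["sad", "hopeless", "party", "sleep", "tired"]
attn = np.array([
    [0.40, 0.35, 0.02, 0.13, 0.10],   # concept: "depressed mood"
    [0.05, 0.05, 0.03, 0.45, 0.42],   # concept: "fatigue"
])
concept_keywords = {
    "depressed mood": {"sad", "hopeless", "down"},
    "fatigue": {"tired", "sleep", "exhausted"},
}

print(alignment_scores(attn, words, concept_keywords, top_k=2))
# → {'depressed mood': 1.0, 'fatigue': 1.0}
```

A score near 1.0 suggests the concept's attention is concentrated on domain-relevant words; low scores would flag concepts whose attention diverges from expert expectations.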
Sources
- Building Trustworthy NeuroSymbolic AI Systems (arXiv, arxiv.org)
Referenced by nodes (2)
- Large Language Models concept
- CREST framework concept