Relations (1)
related (score 2.32) — strongly supported by 4 facts
The CREST framework is designed to enhance the trustworthiness, safety, and explainability of Large Language Models [1, 2]. It also provides procedural methods for LLMs to generate human-understandable explanations and to engage in anticipatory thinking [3, 4].
Facts (4)
Sources
Building Trustworthy NeuroSymbolic AI Systems — arXiv (arxiv.org), 4 facts
claim: The authors propose the CREST framework for achieving trustworthiness in Large Language Models, which stands for Consistency, Reliability, user-level Explainability, and Safety.
procedure: The CREST framework evaluates explainability through two approaches: analyzing the 'Knowledge Concept to Word Attention Map' to verify alignment with domain knowledge, and using knowledge concepts and domain-specific decision guidelines to enable LLMs to generate human-understandable explanations.
procedure: The CREST framework enables LLMs to engage in anticipatory thinking through techniques including paraphrasing, adversarial inputs, knowledge integration, and fine-tuning based on instructions.
reference: The CREST framework utilizes procedural and graph-based knowledge within a NeuroSymbolic framework to address the black-box nature and safety challenges associated with Large Language Models (LLMs).
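One of the techniques listed in the facts above, paraphrasing, can be illustrated as a consistency check: ask a model the same question several ways and measure how often it gives the majority answer. This is a minimal sketch, not CREST's actual implementation; `toy_model` is a hypothetical deterministic stand-in for a real LLM call.

```python
from collections import Counter

def toy_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM; a real system would call an API here."""
    return "paris" if "capital of france" in prompt.lower() else "unknown"

def consistency_score(paraphrases: list[str], model=toy_model) -> float:
    """Fraction of paraphrased prompts that yield the majority answer."""
    answers = [model(p) for p in paraphrases]
    _, top_count = Counter(answers).most_common(1)[0]
    return top_count / len(answers)

# Three paraphrases of the same question; the second phrasing trips up
# the toy model, so the score drops below 1.0, flagging inconsistency.
prompts = [
    "What is the capital of France?",
    "France's capital city is?",
    "Name the capital of France.",
]
score = consistency_score(prompts)
```

A score well below 1.0 signals that the model's answer depends on surface phrasing rather than meaning, which is the kind of consistency failure the CREST framework is described as targeting.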