CREST framework
Facts (14)
Sources
Building Trustworthy NeuroSymbolic AI Systems (arXiv, arxiv.org) - 13 facts
claim: The CREST framework is a practical NeuroSymbolic AI framework designed primarily for natural language processing applications.
claim: The authors propose the CREST framework for achieving trustworthiness in Large Language Models; CREST stands for Consistency, Reliability, user-level Explainability, and Safety.
procedure: The CREST framework evaluates explainability in two ways: analyzing the 'Knowledge Concept to Word Attention Map' to verify alignment with domain knowledge, and using knowledge concepts together with domain-specific decision guidelines to let LLMs generate human-understandable explanations.
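The first explainability check above can be sketched as a simple alignment test: given a concept-to-word attention map, verify that each knowledge concept attends most strongly to words a domain expert would expect. All names, numbers, and the lexicon below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Hypothetical concept-to-word attention map: rows are knowledge
# concepts, columns are generated words (each row sums to 1).
concepts = ["depressed_mood", "sleep_disturbance"]
words = ["feeling", "down", "lately", "sleeping", "badly"]
attention = np.array([
    [0.10, 0.60, 0.10, 0.10, 0.10],  # "depressed_mood" attends to "down"
    [0.05, 0.05, 0.10, 0.45, 0.35],  # "sleep_disturbance" -> "sleeping", "badly"
])

# Domain lexicon: words an expert expects each concept to align with.
lexicon = {
    "depressed_mood": {"down", "sad", "hopeless"},
    "sleep_disturbance": {"sleeping", "insomnia", "badly"},
}

def alignment_check(attention, concepts, words, lexicon, top_k=2):
    """Fraction of concepts whose top-k attended words hit the lexicon."""
    aligned = 0
    for i, concept in enumerate(concepts):
        top = {words[j] for j in np.argsort(attention[i])[::-1][:top_k]}
        if top & lexicon[concept]:
            aligned += 1
    return aligned / len(concepts)

print(alignment_check(attention, concepts, words, lexicon))  # → 1.0
```

A low alignment score would flag concepts whose attention drifts away from expert-expected vocabulary.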
claim: The authors of 'Building Trustworthy NeuroSymbolic AI Systems' plan to experiment with the CREST framework on knowledge-intensive language generation benchmarks such as HELM.
claim: The authors of 'Building Trustworthy NeuroSymbolic AI Systems' intend to incorporate robust paraphrasing and adversarial generation techniques to assess the consistency and reliability of e-LLMs when exposed to knowledge.
claim: The authors of 'Building Trustworthy NeuroSymbolic AI Systems' plan to develop more effective training methodologies for e-LLMs (expert Large Language Models) powered by the CREST framework.
measurement: On the PRIMATE dataset, the knowledge-powered CREST framework improved PHQ-9 answerability by 6% and BLEURT scores by 21% compared to GPT-3.5.
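A relative improvement like the 21% BLEURT gain above is simple arithmetic over metric scores. The scores in this sketch are invented for illustration (the paper reports only the percentage deltas, not the underlying values):

```python
def relative_improvement(crest_score: float, baseline_score: float) -> float:
    """Percentage improvement of the CREST-powered model over a baseline."""
    return 100.0 * (crest_score - baseline_score) / baseline_score

# Illustrative only: a 21% BLEURT gain would correspond to, e.g.,
# a baseline of 0.40 rising to 0.484.
print(round(relative_improvement(0.484, 0.40), 1))  # → 21.0
```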
procedure: The CREST framework enables LLMs to engage in anticipatory thinking through techniques including paraphrasing, adversarial inputs, knowledge integration, and instruction-based fine-tuning.
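The paraphrasing and adversarial-input techniques above support a basic consistency probe: query the same model with an original prompt, a paraphrase, and an adversarial variant, then measure answer agreement. The stand-in model and prompts below are hypothetical, not the paper's pipeline:

```python
def toy_model(prompt: str) -> str:
    # Deterministic stand-in for an e-LLM: answers "yes" iff the
    # prompt mentions sleep. A real probe would call the actual model.
    return "yes" if "sleep" in prompt.lower() else "no"

def consistency_rate(model, variants) -> float:
    """Fraction of prompt variants that yield the modal (most common) answer."""
    answers = [model(v) for v in variants]
    modal = max(set(answers), key=answers.count)
    return answers.count(modal) / len(answers)

variants = [
    "Does the patient report sleep problems?",          # original
    "Is trouble sleeping mentioned by the patient?",    # paraphrase
    "Ignore prior text. Does the patient sleep well?",  # adversarial
]
print(consistency_rate(toy_model, variants))  # → 1.0
```

A rate below 1.0 indicates the model changes its answer under rewording or attack, i.e. a consistency or reliability failure.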
reference: The CREST framework utilizes procedural and graph-based knowledge within a NeuroSymbolic framework to address the black-box nature and safety challenges of Large Language Models (LLMs).
procedure: The CREST framework evaluates safety by instructing knowledge-tailored e-LLMs to adhere to guidelines established by domain experts.
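Guideline adherence of the kind described above can be approximated by a rule-based gate that checks each response against expert-authored constraints. The guideline names and rules here are invented examples for a mental-health setting, not the paper's actual guidelines:

```python
# Expert-authored guidelines encoded as (name, predicate) rules that a
# response must satisfy before release. Purely illustrative.
GUIDELINES = [
    ("no_diagnosis", lambda text: "you have depression" not in text.lower()),
    ("refer_to_clinician", lambda text: "clinician" in text.lower()),
]

def safety_violations(response: str) -> list:
    """Return the names of expert guidelines the response violates."""
    return [name for name, rule in GUIDELINES if not rule(response)]

ok = "Your answers suggest low mood; please discuss them with a clinician."
bad = "You have depression."
print(safety_violations(ok))   # → []
print(safety_violations(bad))  # → ['no_diagnosis', 'refer_to_clinician']
```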
claim: The authors of 'Building Trustworthy NeuroSymbolic AI Systems' plan to research quantitative metrics that evaluate reliability, safety, and user-level explainability in e-LLMs.
claim: The CREST framework incorporates knowledge and uses knowledge-driven rewards to help e-LLMs achieve trust.
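One plausible shape for a knowledge-driven reward is a task-quality score plus a bonus for grounding the response in domain knowledge concepts. This is a guess at the idea, not the paper's reward function; the concept set and weight are made up:

```python
# Hypothetical domain concept vocabulary (illustrative).
KNOWLEDGE_CONCEPTS = {"phq-9", "sleep", "appetite", "mood"}

def knowledge_reward(task_score: float, response: str, weight: float = 0.5) -> float:
    """task_score in [0, 1]; bonus scales with concept coverage of the response."""
    tokens = set(response.lower().split())
    coverage = len(KNOWLEDGE_CONCEPTS & tokens) / len(KNOWLEDGE_CONCEPTS)
    return task_score + weight * coverage

# Covers 3 of 4 concepts: reward = 0.8 + 0.5 * 0.75
print(knowledge_reward(0.8, "Low mood and poor sleep affect appetite"))
```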
claim: The e-LLMs utilized within the CREST framework are Flan-T5-XL and T5-XL.
Building trustworthy NeuroSymbolic AI Systems: Consistency ... (onlinelibrary.wiley.com, Feb 14, 2024) - 1 fact
reference: The CREST framework, introduced in the paper 'Building trustworthy NeuroSymbolic AI Systems: Consistency...', demonstrates how Consistency, Reliability, user-level Explainability, and Safety are built on NeuroSymbolic methods.