Sources (6 facts)
The Synergy of Symbolic and Connectionist AI in LLM ... (arxiv.org, 2 facts)
Claim: Chain-of-Thought (CoT) prompting improves problem-solving accuracy and reliability in LLMs by enabling coherent, step-by-step elaboration of thought processes.
Claim: The Chain-of-Thought (CoT) method enhances the cognitive task performance of LLM-empowered agents by guiding the models to generate text about intermediate reasoning steps.
The construction and refined extraction techniques of knowledge ... (nature.com, 1 fact)
Procedure: The ablation study framework for evaluating knowledge extraction models includes five variants: (1) Full Model, which integrates BM-LoRA, TL-LoRA, TA-LoRA, RAG, and CoT; (2) w/o TA-LoRA, which excludes the Task-Adaptive LoRA module; (3) w/o RAG, which disables Retrieval-Augmented Generation; (4) w/o CoT, which removes Chain-of-Thought prompting; and (5) Rule-based Only, which uses only rule-based systems and ontological constraints.
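The five ablation variants above can be sketched as feature-toggle configurations derived from the full model. This is a minimal illustrative sketch: the boolean keys and the `variant` helper are assumptions for clarity, not the paper's actual configuration format.

```python
# Full model configuration: all modules enabled, rule-based fallback off.
# Key names are illustrative shorthand for the modules named in the study.
FULL = {
    "bm_lora": True,   # BM-LoRA module
    "tl_lora": True,   # TL-LoRA module
    "ta_lora": True,   # Task-Adaptive LoRA module
    "rag": True,       # Retrieval-Augmented Generation
    "cot": True,       # Chain-of-Thought prompting
    "rules_only": False,
}

def variant(name: str, **overrides) -> dict:
    """Derive an ablation variant by overriding the full configuration."""
    cfg = dict(FULL, **overrides)
    cfg["name"] = name
    return cfg

ABLATIONS = [
    variant("Full Model"),
    variant("w/o TA-LoRA", ta_lora=False),
    variant("w/o RAG", rag=False),
    variant("w/o CoT", cot=False),
    variant("Rule-based Only", bm_lora=False, tl_lora=False,
            ta_lora=False, rag=False, cot=False, rules_only=True),
]
```

Each entry disables exactly the module(s) its name indicates, so a single evaluation loop over `ABLATIONS` covers the whole study design.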
Medical Hallucination in Foundation Models and Their Impact on ... (medrxiv.org, 1 fact)
Procedure: The Chain-of-Thought (CoT) evaluation method appends the phrase 'Let’s think step by step.' to each question to encourage the LLM to articulate its reasoning process explicitly.
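The prompt construction described in this evaluation method is mechanical enough to sketch directly. The helper function and example question below are illustrative, not taken from the source.

```python
# Fixed Chain-of-Thought trigger phrase appended to every question.
COT_SUFFIX = "Let's think step by step."

def build_cot_prompt(question: str) -> str:
    """Append the CoT trigger phrase to a question, separated by a space."""
    return f"{question.rstrip()} {COT_SUFFIX}"

# Illustrative usage:
prompt = build_cot_prompt("Which of these drug interactions is contraindicated?")
```

The resulting prompt is then sent to the model unchanged, so the only intervention is the appended suffix.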
Survey and analysis of hallucinations in large language models (frontiersin.org, 1 fact)
Claim: Structured prompt strategies, such as chain-of-thought (CoT) prompting, significantly reduce hallucinations in prompt-sensitive scenarios, although intrinsic model limitations persist in some cases.
A framework to assess clinical safety and hallucination rates of LLMs ... (nature.com, 1 fact)
Claim: Chain-of-Thought (CoT) prompting generally enhances the reasoning abilities of large language models.