Sources
- The Synergy of Symbolic and Connectionist AI in LLM ... (arxiv.org)
  - Claim: Chain-of-Thought (CoT) prompting improves problem-solving accuracy and reliability in LLMs by enabling coherent, step-by-step elaboration of thought processes.
- Survey and analysis of hallucinations in large language models (frontiersin.org)
  - Procedure: The experimental pipeline evaluates hallucinations in open-source LLMs by combining benchmark datasets, varied prompting strategies (zero-shot, few-shot, CoT), and text generation via HuggingFace.
- A framework to assess clinical safety and hallucination rates of LLMs ... (nature.com)
  - Claim: Chain-of-Thought (CoT) prompting generally enhances the reasoning abilities of large language models.
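The evaluation pipeline described in the frontiersin.org entry can be sketched in Python. This is a minimal illustration of the three prompting strategies, not the paper's actual code: the question, the few-shot example, and the model name are hypothetical placeholders, and the HuggingFace generation step is shown in a comment since it requires downloading a model.

```python
# Sketch of the three prompting strategies (zero-shot, few-shot, CoT)
# applied to the same question. All inputs below are illustrative.

def zero_shot(question: str) -> str:
    # Ask the question directly, with no examples.
    return f"Q: {question}\nA:"

def few_shot(question: str, examples: list[tuple[str, str]]) -> str:
    # Prepend worked question/answer pairs before the target question.
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"

def chain_of_thought(question: str) -> str:
    # Elicit step-by-step reasoning before the final answer.
    return f"Q: {question}\nA: Let's think step by step."

question = "If a train travels 60 km in 30 minutes, what is its speed in km/h?"
prompts = {
    "zero-shot": zero_shot(question),
    "few-shot": few_shot(question, [("What is 2 + 2?", "4")]),
    "cot": chain_of_thought(question),
}

# Text generation via HuggingFace would then look like
# (requires the `transformers` package and a model download):
# from transformers import pipeline
# generator = pipeline("text-generation", model="gpt2")
# outputs = {name: generator(p, max_new_tokens=64)
#            for name, p in prompts.items()}
```

Comparing the generated answers across the three prompt variants, against the gold answers of a benchmark dataset, is what lets such a pipeline quantify hallucination rates per strategy.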