Relations (1)

related (5.17) — strongly supporting 35 facts

Chain-of-thought (CoT) is a prompting technique designed to enhance the reasoning ability, problem-solving accuracy, and transparency of large language models [1], [2], [3]. It works by guiding a model to generate intermediate reasoning steps before its final answer [1], [4], which helps mitigate limitations such as hallucination and token-level constraints [5], [6].
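The mechanism described above amounts to plain prompt construction. A minimal sketch of the two common variants, zero-shot CoT (a trigger phrase appended to the question) and few-shot CoT (worked exemplars prepended); the function names and example text are illustrative, not from any cited paper:

```python
# Sketch of chain-of-thought prompt construction. The zero-shot trigger
# phrase and the few-shot exemplar layout follow common CoT practice;
# helper names and sample text are illustrative only.

COT_TRIGGER = "Let's think step by step."

def build_zero_shot_cot(question: str) -> str:
    """Nudge the model to emit intermediate reasoning before answering."""
    return f"Q: {question}\nA: {COT_TRIGGER}"

def build_few_shot_cot(question: str, exemplars: list[tuple[str, str]]) -> str:
    """Prepend worked (question, step-by-step solution) pairs."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in exemplars)
    return f"{shots}\n\nQ: {question}\nA:"
```

For example, `build_zero_shot_cot("If I have 3 apples and eat 1, how many remain?")` produces a prompt ending in the trigger phrase, which is what elicits the step-wise output the facts below describe.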

Facts (35)

Sources
Survey and analysis of hallucinations in large language models · frontiersin.org (Frontiers) · 7 facts
claim: Chain-of-Thought prompting and instruction-based inputs are effective for mitigating hallucinations in Large Language Models but are insufficient in isolation.
claim: Chain-of-thought prompting reduces reasoning and factual QA errors in large language models with high feasibility for implementation.
claim: Prompt engineering, particularly Chain-of-Thought (CoT) prompting, reduces hallucination rates in large language models but is not universally effective.
reference: Wang et al. (2022) demonstrated that the self-consistency method improves chain-of-thought reasoning performance in large language models.
claim: Chain-of-Thought (CoT) prompting (Wei et al., 2022) improves reasoning transparency and factual correctness in large language models by encouraging step-wise output generation.
claim: Chain-of-Thought and instruction prompts significantly reduce hallucination rates across all large language models.
claim: Researchers have attempted to reduce hallucinations in Large Language Models using prompting techniques including chain-of-thought prompting, self-consistency decoding, retrieval-augmented generation, and verification-based refinement.
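The self-consistency method cited above (Wang et al., 2022) samples several reasoning chains at nonzero temperature and keeps the most frequent final answer, marginalizing out the individual reasoning paths. A minimal sketch of the voting step, assuming the chains have already been sampled and their final answers extracted (the answer list in the usage example is illustrative):

```python
from collections import Counter

def self_consistent_answer(final_answers: list[str]) -> str:
    """Majority-vote over the final answers of independently sampled
    chain-of-thought completions; ties break toward the answer seen first."""
    return Counter(final_answers).most_common(1)[0][0]
```

For instance, `self_consistent_answer(["4", "4", "5", "4", "4"])` returns `"4"`: one sampled chain went wrong, but the vote recovers the consensus answer.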
Large Language Models Meet Knowledge Graphs for Question ... · arxiv.org (arXiv) · 6 facts
reference: Li et al. (2025a) proposed CoT-RAG, a framework that integrates chain of thought reasoning and retrieval-augmented generation to enhance reasoning capabilities in large language models (arXiv:2504.13534).
claim: Ruilin Zhao, Feng Zhao, Long Wang, Xianzhi Wang, and Guandong Xu published the paper 'KG-CoT: Chain-of-thought prompting of large language models over knowledge graphs for knowledge-aware question answering' in 2024.
reference: Shirdel et al. (2025) published 'AprèsCoT: Explaining LLM answers with knowledge graphs and chain of thought' in EDBT, pages 1142–1145, introducing a method for explaining LLM outputs using knowledge graphs and chain-of-thought reasoning.
reference: KGQA (Ji et al., 2024) integrates Chain-of-Thought (CoT) prompting with graph retrieval to enhance retrieval quality and multi-hop reasoning capabilities of Large Language Models in Question Answering tasks.
reference: Wang et al. (2023) introduced 'keqing', a knowledge-based question answering framework that acts as a chain-of-thought mentor for large language models.
procedure: Structure-aware retrieval and reranking methods should be employed to identify subgraphs consistent with gold subgraphs, and Chain-of-Thought (CoT) prompting can guide Large Language Models in generating explicit reasoning steps grounded in retrieved subgraphs.
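The procedure above — retrieve a subgraph, then prompt the model to reason step by step over it — can be sketched as prompt assembly. The (head, relation, tail) serialization and the instruction wording are assumptions for illustration, not taken from any of the cited papers:

```python
# Sketch of grounding a chain-of-thought prompt in retrieved KG triples.
# The triple serialization and instruction text are illustrative; real
# KG-CoT systems define their own retrieval and prompt formats.

def format_subgraph(triples: list[tuple[str, str, str]]) -> str:
    """Serialize (head, relation, tail) triples, one per line."""
    return "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)

def build_kg_cot_prompt(question: str, triples: list[tuple[str, str, str]]) -> str:
    """Constrain the model to the retrieved facts, then trigger CoT."""
    return (
        "Answer using only the facts below. Reason step by step, "
        "citing one fact per step.\n"
        f"Facts:\n{format_subgraph(triples)}\n\n"
        f"Q: {question}\nA: Let's think step by step."
    )
```

Grounding the steps in an explicit fact list is what lets the generated reasoning be checked against the retrieved subgraph afterward.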
A Survey on the Theory and Mechanism of Large Language Models · arxiv.org (arXiv) · 4 facts
claim: Looped architectures in large language models can simulate Chain-of-Thought (CoT) internally through 'latent thoughts', which can efficiently substitute for explicit token generation.
claim: The inference-time scaling paradigm in Large Language Models is established through the Chain-of-Thought (CoT) mechanism and external search-based algorithms that extend the model's thinking process, as cited by Wei et al. (2022d), Yao et al. (2024a), Kang et al. (2024), Zhang et al. (2024a), and Feng et al. (2023b).
claim: Chain-of-thought (CoT) reasoning has significantly increased the expressive power of large language models, leading researchers to investigate how to implicitly incorporate iterative reasoning into a model's inductive bias.
reference: The paper 'Towards reasoning era: a survey of long chain-of-thought for reasoning large language models' is an arXiv preprint, identified as arXiv:2503.09567.
The Synergy of Symbolic and Connectionist AI in LLM ... · arxiv.org (arXiv) · 3 facts
claim: Chain-of-Thought (CoT) prompting improves problem-solving accuracy and reliability in LLMs by enabling coherent, step-by-step elaboration of thought processes.
reference: Chain-of-thought prompting as a method to elicit reasoning in large language models was introduced by Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. in the 2022 Advances in Neural Information Processing Systems paper 'Chain-of-thought prompting elicits reasoning in large language models'.
claim: Chain-of-Thought (CoT) and Tree-of-Thoughts (ToT) reasoning mechanisms mitigate the limitations of token-level constraints in Large Language Models (LLMs).
The Synergy of Symbolic and Connectionist AI in LLM-Empowered ... · arxiv.org (arXiv) · 2 facts
claim: The Chain-of-Thought (CoT) method guides large language models to generate text about intermediate reasoning steps, which structures reasoning systematically and improves cognitive task performance, problem-solving accuracy, and reliability.
claim: Tree-of-Thought (ToT) prompting extends the Chain-of-Thought approach by allowing large language models to explore multiple reasoning paths simultaneously within a tree structure.
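The tree-structured exploration that ToT adds to CoT can be sketched as beam search over partial reasoning paths. Here `expand` and `score` stand in for LLM calls (propose next thoughts, evaluate a partial path); the toy arithmetic task is an assumption made for the sake of a runnable example:

```python
# Tree-of-Thoughts sketch: breadth-first expansion of partial reasoning
# paths, pruned to a beam by a scoring function. In a real system both
# expand() and score() would be LLM calls; here they do toy arithmetic.

def tree_of_thoughts(root, expand, score, beam=2, depth=3):
    frontier = [root]
    for _ in range(depth):
        candidates = [child for node in frontier for child in expand(node)]
        if not candidates:
            break
        # Keep only the top-scoring partial paths at each level.
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)

# Toy task: choose three digits whose sum is as close to 7 as possible.
def expand(path):
    return [path + [d] for d in range(10)] if len(path) < 3 else []

def score(path):
    return -abs(7 - sum(path))
```

Each node branches into ten children, but only `beam` paths survive per level, which is the essential difference from CoT's single linear chain: `tree_of_thoughts([], expand, score)` returns a three-digit path summing to 7.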
KG-RAG: Bridging the Gap Between Knowledge and Creativity · arxiv.org (arXiv) · 1 fact
claim: Prompt engineering techniques, including Chain of Thought (CoT), Tree of Thought (ToT), Graph of Thoughts (GoT), and ReAct (Reason and Act), have demonstrated significant improvements in the reasoning abilities and task-specific actions of Large Language Models.
Medical Hallucination in Foundation Models and Their ... · medrxiv.org (medRxiv) · 2 facts
claim: Techniques for reducing anchoring and confirmation bias in clinical settings, such as prompting systematic consideration of differential diagnoses, may inform prompt design or chain-of-thought strategies in Large Language Models, according to Wang and Zhang (2024b).
claim: Chain-of-Thought (CoT) prompting strategies can encourage step-by-step output generation in Large Language Models.
A framework to assess clinical safety and hallucination rates of LLMs ... · nature.com (Nature) · 2 facts
claim: Chain of Thought (CoT) prompting generally enhances the reasoning abilities of large language models.
reference: Wei et al. (2023) demonstrated that chain-of-thought prompting elicits reasoning capabilities in large language models.
Unlocking the Potential of Generative AI through Neuro-Symbolic ... · arxiv.org (arXiv) · 1 fact
claim: Prompt engineering techniques, including Chain-of-Thought (CoT) prompting, zero-shot prompting, and few-shot prompting, enable Large Language Models (LLMs) to reason and generalize across diverse tasks without requiring extensive retraining.
Practices, opportunities and challenges in the fusion of knowledge ... · frontiersin.org (Frontiers) · 1 fact
reference: The paper 'Kg-cot: chain-of-thought prompting of large language models over knowledge graphs for knowledge-aware question answering' was published in the Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24) in 2024.
A Survey of Incorporating Psychological Theories in LLMs · arxiv.org (arXiv) · 1 fact
claim: Yang et al. (2023) developed 'PsyCoT', a method that uses psychological questionnaires as a chain-of-thought mechanism for personality detection in Large Language Models, published in the Findings of the Association for Computational Linguistics: EMNLP 2023.
Applying Large Language Models in Knowledge Graph-based ... · Benedikt Reitemeyer, Hans-Georg Fill · arxiv.org (arXiv) · 1 fact
reference: Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q.V., and Zhou, D. published the paper 'Chain-of-thought prompting elicits reasoning in large language models' in the 2022 Advances in Neural Information Processing Systems.
How to Improve Multi-Hop Reasoning With Knowledge Graphs and ... · neo4j.com (Neo4j) · 1 fact
perspective: Chain-of-thought reasoning in LLMs is not the most user-friendly technique, because the multiple LLM calls it requires can make response latency high.
Unknown source · 1 fact
reference: The research paper titled 'CoT-RAG: Integrating Chain of Thought and Retrieval-Augmented Generation to Enhance Reasoning in Large Language Models' proposes a method that combines Chain of Thought prompting with Retrieval-Augmented Generation to improve the reasoning capabilities of large language models.
Building Trustworthy NeuroSymbolic AI Systems · arxiv.org (arXiv) · 1 fact
claim: Methods like chain-of-thought and tree-of-thoughts prompting can act as sanity checks to examine the deceptive nature of Large Language Models (Connor Leahy 2023; Yao et al. 2023a).