Relations (1)

related (score 3.00) — strongly supported by 7 facts

Large Language Models are directly linked to reasoning capabilities through research demonstrating that techniques like chain-of-thought prompting [1] and reinforcement learning [2] elicit or incentivize these abilities. Furthermore, studies explicitly examine the limits and development of reasoning capabilities within the architecture of Large Language Models [3], [4], [5], [6], and [7].

Facts (7)

Sources
A Survey of Incorporating Psychological Theories in LLMs (arXiv, 2 facts)

measurement: Cognitive development and reasoning capabilities in Large Language Models have been assessed through cognitive maturity (Laverghetta Jr. & Licato, 2022), subjective similarity (Malloy et al., 2024), reasoning strategies (Mondorf & Plank, 2024; Yuan et al., 2023), decision-making (Ying et al., 2024), and Theory of Mind (Jung et al., 2024).

reference: Zhang et al. (2024a) published 'Working memory identifies reasoning limits in language models' in the Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, which examines the relationship between working memory and reasoning capabilities in LLMs.
Large language models for intelligent RDF knowledge graph ... (PMC, 1 fact)

claim: The paper titled "Large language models for intelligent RDF knowledge graph" introduces a methodology that leverages the contextual understanding and reasoning capabilities of Large Language Models (LLMs).
A Survey on the Theory and Mechanism of Large Language Models (arXiv, 1 fact)

reference: The paper 'Deepseek-r1: incentivizing reasoning capability in llms via reinforcement learning' (arXiv:2501.12948) is cited in the survey 'A Survey on the Theory and Mechanism of Large Language Models' regarding reasoning capabilities.
Large Language Models Meet Knowledge Graphs for Question ... (arXiv, 1 fact)

reference: Yifan Wei, Yisong Su, Huanhuan Ma, Xiaoyan Yu, Fangyu Lei, Yuanzhe Zhang, Jun Zhao, and Kang Liu authored 'MenatQA: A new dataset for testing the temporal comprehension and reasoning abilities of large language models', published in the 2023 EMNLP proceedings.
A Comprehensive Benchmark and Evaluation Framework for Multi ... (arXiv, 1 fact)

reference: DeepSeek-AI published the DeepSeek-R1 technical report in 2025, detailing the use of reinforcement learning to incentivize reasoning capabilities in large language models.
A framework to assess clinical safety and hallucination rates of LLMs ... (Nature, 1 fact)

reference: Wei et al. (2023) demonstrated that chain-of-thought prompting elicits reasoning capabilities in large language models.