concept

reasoning capabilities

Also known as: reasoning abilities, reasoning capability

Facts (13)

Sources
A Survey on the Theory and Mechanism of Large Language Models arxiv.org arXiv Mar 12, 2026 2 facts
reference: The paper 'DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning' (arXiv:2501.12948) is cited in the survey 'A Survey on the Theory and Mechanism of Large Language Models' regarding reasoning capabilities.
procedure: The LLM training process consists of two primary stages: (1) Pre-Training, a massive-scale, self-supervised process in which the model optimizes a next-token prediction objective to acquire linguistic knowledge and reasoning abilities; and (2) Supervised Fine-Tuning (SFT), in which the pre-trained model is trained on a smaller, high-quality dataset of labeled instruction-response pairs to adapt it to human intent.
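The two stages described above share the same underlying objective. A minimal illustrative sketch (not any specific model's implementation) of next-token cross-entropy, with an optional mask of the kind SFT setups commonly use to score only response tokens; the function name and toy logits are this sketch's own:

```python
import math

def next_token_loss(logits, targets, mask=None):
    """Mean cross-entropy of predicting targets[i] from logits[i].

    logits: list of per-position score vectors over the vocabulary.
    mask: optional 0/1 flags; SFT often masks out prompt tokens.
    """
    total, count = 0.0, 0
    for i, (scores, t) in enumerate(zip(logits, targets)):
        if mask is not None and not mask[i]:
            continue  # skip prompt positions during SFT
        # numerically stable log-sum-exp normalizer
        z = max(scores)
        log_norm = z + math.log(sum(math.exp(s - z) for s in scores))
        total += log_norm - scores[t]
        count += 1
    return total / max(count, 1)

# Pre-training style: loss over every position of raw text.
logits = [[2.0, 0.1, -1.0], [0.0, 1.5, 0.2]]
pretrain_loss = next_token_loss(logits, targets=[0, 1])

# SFT style: same objective, but only response tokens contribute.
sft_loss = next_token_loss(logits, targets=[0, 1], mask=[0, 1])
```

The point of the sketch is that SFT is not a different loss, only a different (and filtered) data distribution fed into the same next-token objective.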
A Survey of Incorporating Psychological Theories in LLMs arxiv.org arXiv 2 facts
measurement: Cognitive development and reasoning capabilities in Large Language Models have been assessed through cognitive maturity (Laverghetta Jr. & Licato, 2022), subjective similarity (Malloy et al., 2024), reasoning strategies (Mondorf & Plank, 2024; Yuan et al., 2023), decision-making (Ying et al., 2024), and Theory of Mind (Jung et al., 2024).
reference: Zhang et al. (2024a) published 'Working memory identifies reasoning limits in language models' in the Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, which examines the relationship between working memory and reasoning capabilities in LLMs.
The Profound Interplay Between Sleep and Cognitive Function creyos.com Mackenzie Godard · Creyos Aug 14, 2025 2 facts
claim: Proper sleep supports reasoning abilities and memory consolidation, and protects against cognitive decline and conditions like Alzheimer's disease.
claim: Sleep plays an invaluable role in maintaining optimal cognitive performance and preserving brain health, including supporting reasoning abilities, memory consolidation, and protection against cognitive decline and Alzheimer's disease.
Large language models for intelligent RDF knowledge graph ... pmc.ncbi.nlm.nih.gov PMC Apr 25, 2025 1 fact
claim: The paper titled "Large language models for intelligent RDF knowledge graph" introduces a methodology that leverages the contextual understanding and reasoning capabilities of Large Language Models (LLMs).
Unlocking the Potential of Generative AI through Neuro-Symbolic ... arxiv.org arXiv Feb 16, 2025 1 fact
claim: The 'Neuro → Symbolic ← Neuro' model consistently outperforms other neuro-symbolic architectures across all evaluation metrics, including generalization, reasoning capabilities, transferability, and interpretability.
Medical Hallucination in Foundation Models and Their Impact on ... medrxiv.org medRxiv Nov 2, 2025 1 fact
perspective: Hallucination resistance in specialized medical contexts emerges from sophisticated reasoning capabilities, internal consistency mechanisms, and broad world knowledge developed during large-scale pretraining, rather than from domain-specific fine-tuning.
A survey on augmenting knowledge graphs (KGs) with large ... link.springer.com Springer Nov 4, 2024 1 fact
claim: Future evaluation techniques for integrated knowledge graph and LLM systems should aim to measure complex aspects such as knowledge representation and reasoning capabilities, rather than relying solely on traditional performance metrics.
Large Language Models Meet Knowledge Graphs for Question ... arxiv.org arXiv Sep 22, 2025 1 fact
reference: Yifan Wei, Yisong Su, Huanhuan Ma, Xiaoyan Yu, Fangyu Lei, Yuanzhe Zhang, Jun Zhao, and Kang Liu authored 'MenatQA: A new dataset for testing the temporal comprehension and reasoning abilities of large language models', published in the 2023 EMNLP proceedings.
A Comprehensive Benchmark and Evaluation Framework for Multi ... arxiv.org arXiv Jan 6, 2026 1 fact
reference: DeepSeek-AI published the DeepSeek-R1 technical report in 2025, detailing the use of reinforcement learning to incentivize reasoning capabilities in large language models.
A framework to assess clinical safety and hallucination rates of LLMs ... nature.com Nature May 13, 2025 1 fact
reference: Wei et al. (2023) demonstrated that chain-of-thought prompting elicits reasoning capabilities in large language models.