concept

reasoning

synthesized from dimensions

Reasoning is a fundamental cognitive and computational process defined as the systematic ability to draw conclusions, derive inferences, and construct justified beliefs by composing concepts, analyzing data, and evaluating relations between ideas. It serves as a primary mechanism for knowledge acquisition, enabling the transition from raw sensory input or disparate facts to structured understanding. Across both human cognition and artificial intelligence, reasoning is recognized not as an isolated faculty, but as a multifaceted capability that functions alongside perception, memory, and planning to navigate complex environments and solve problems.

Philosophically, reasoning is categorized by its logical structure and its relationship to truth. Traditional frameworks distinguish between deduction, in which the truth of the premises guarantees the truth of the conclusion, and induction, in which the premises make the conclusion probable; some philosophers use 'induction' broadly to cover any non-deductive form of reasoning, including abduction. Epistemologically, reasoning is the bedrock of justified belief: the Internet Encyclopedia of Philosophy holds that a belief is justified if it is obtained through sound reasoning and solid evidence, and careful reasoning is considered an intellectual virtue essential for attaining truth. While rationalists emphasize reasoning as a source of eternal, abstract knowledge, empirical traditions integrate it with sensory experience, holding that all knowledge requires reasoning to interpret and analyze sensory data.
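
The deduction/induction distinction can be made concrete: a deductive inference is valid exactly when no assignment of truth values satisfies all the premises while falsifying the conclusion. A minimal, purely illustrative sketch (not from any cited source) checks this exhaustively for two propositional variables:

```python
# Illustrative sketch: deduction as truth-preserving inference.
# Modus ponens: from P and P -> Q, one may deduce Q.
from itertools import product

def entails(premises, conclusion):
    """True iff every assignment satisfying all premises also satisfies the conclusion."""
    for p, q in product([False, True], repeat=2):
        if all(f(p, q) for f in premises) and not conclusion(p, q):
            return False  # found a counterexample: premises true, conclusion false
    return True

premises = [lambda p, q: p, lambda p, q: (not p) or q]  # P, and P -> Q
print(entails(premises, lambda p, q: q))            # modus ponens is valid
print(entails(premises, lambda p, q: p and not q))  # not guaranteed by the premises
```

Inductive and abductive inferences, by contrast, would fail this test: their premises can all be true while the conclusion is false, which is precisely why they only render conclusions probable.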

In the domain of artificial intelligence, reasoning is defined as the action of drawing conclusions efficiently by composing learned concepts (Bengio). Modern large language models (LLMs) support this through massive transformer architectures, often enhanced by prompting techniques: Chain-of-Thought (CoT) improves logical decision-making by eliciting intermediate reasoning steps, while Tree-of-Thought (ToT) structures allow models to explore multiple reasoning paths in parallel. Approaches like ReAct go further by synergizing reasoning with external action, and neuro-symbolic AI (NSAI) architectures attempt to bridge the gap between neural learning and symbolic, rule-based logic.
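
The contrast between CoT and ToT can be sketched in a few lines. This is a hedged illustration only: `generate` is a stub standing in for any LLM call (no specific API is assumed), and a real ToT system would score and prune paths rather than keep them all.

```python
# Sketch of CoT prompting vs. a minimal ToT expansion loop (illustrative).

def generate(prompt):
    # Stub: a real system would call a language model here.
    return f"Continuation of: {prompt[:40]}..."

def chain_of_thought(question):
    """CoT: a single path, prompting the model for intermediate reasoning steps."""
    return generate(f"{question}\nLet's think step by step.")

def tree_of_thought(question, branch=2, depth=2):
    """ToT: expand several candidate reasoning paths breadth-first."""
    frontier = [question]
    for _ in range(depth):
        frontier = [generate(f"{path}\nNext thought:")
                    for path in frontier for _ in range(branch)]
    return frontier  # a real system would evaluate and prune these paths

paths = tree_of_thought("Is 27 a perfect cube?")
print(len(paths))  # branch ** depth = 4 candidate reasoning paths
```

The design point is structural: CoT commits to one linear chain, whereas ToT trades extra generation calls (branch ** depth of them) for the ability to back out of a bad line of reasoning.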

Despite these advancements, reasoning in AI remains a significant technical challenge. Critics such as Gary Marcus have argued, notably in the 2019 Montreal AI Debate, that expecting a monolithic architecture to handle abstraction and reasoning is unrealistic. Current systems also frequently conflate genuine reasoning with post-hoc justification, a source of opacity in frameworks such as KG-CoT, and produce hallucinations, which are increasingly analyzed as reasoning-driven failures rather than knowledge deficits. The integration of knowledge graphs and ontologies is increasingly used to mitigate these errors by providing structured, verifiable frameworks for inference.
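
The grounding idea behind KG integration is simple: a generated claim is only accepted if it can be matched against structured, verifiable knowledge. A minimal sketch with hypothetical data (the triples and function names below are invented for illustration, not drawn from any cited system):

```python
# Sketch: grounding generated claims against a tiny knowledge graph of
# (subject, relation, object) triples, the kind of structured check
# KG-augmented systems use to flag unsupported statements.

KG = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
}

def supported(triple, kg=KG):
    """A claim is treated as verifiable only if its triple appears in the graph."""
    return triple in kg

print(supported(("Paris", "capital_of", "France")))   # True: grounded claim
print(supported(("Paris", "capital_of", "Germany")))  # False: flagged as unsupported
```

Real systems add entity linking, relation normalization, and multi-hop inference on top of this membership test, but the principle is the same: the graph supplies an external standard against which a model's justifications can be audited.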

Ultimately, the significance of reasoning lies in its role as a bridge between information and action. Whether evaluated through expert-rated depth on complex tasks (IKEDS recommendations, for example, achieved 85% expert-rated reasoning depth for indirect implications) or through standardized benchmarks such as PlanBench (Valmeekam et al.), the capacity for sophisticated reasoning remains a primary metric for assessing intelligence. As research continues, the field remains focused on overcoming the limitations of current models to achieve more reliable, transparent, and generalized reasoning.

Model Perspectives (3)
openrouter/x-ai/grok-4.1-fast definitive 92% confidence
Reasoning in AI contexts, as defined by Samy Bengio, Senior Director at Apple, involves drawing conclusions efficiently by composing learned concepts. Large language models (LLMs) support reasoning alongside perception, planning, and action through massive transformer architectures. Techniques like Tree-of-Thought (ToT) prompting enable LLMs to explore multiple reasoning paths in tree structures, Yao et al.'s ReAct synergizes reasoning and acting, and chain-of-thought (CoT) prompting enhances logical decision-making. Neuro-symbolic AI (NSAI) architectures combine neural learning with symbolic reasoning to analyze data and draw conclusions. Knowledge graphs integrated with LLMs or RAG boost reasoning accuracy via structured knowledge, and ontologies enable inferencing. Models like OpenAI's o1-preview spend more time thinking on complex tasks. Challenges include KG-CoT's opacity, which conflates reasoning with post-hoc justifications, and hallucinations understood as reasoning failures. Symbolic AI excels at rule-based reasoning, and neuro-symbolic approaches such as the Neuro Symbolic Neuro architecture rate highest on reasoning criteria.
openrouter/x-ai/grok-4.1-fast definitive 88% confidence
Reasoning is a core cognitive process essential for knowledge acquisition, defined philosophically as aiming to gain knowledge through 'relations of ideas' and 'matters of fact' according to David Hume (Rebus Community; K. S. Sangeetha), and as one of two powers of cognition alongside sensibility per Immanuel Kant (The Collector). Logic studies correct reasoning (Wikipedia), with key forms including deduction, where premises guarantee conclusion truth (Rebus Community; Todd R. Long; K. S. Sangeetha), and induction, making conclusions probable (Rebus Community; Todd R. Long; K. S. Sangeetha); some philosophers extend induction to all non-deductive reasoning like abduction (Rebus Community; Todd R. Long). All knowledge requires reasoning to analyze sensory data (Internet Encyclopedia of Philosophy), with a priori knowledge relying solely on it for abstract facts (Cambodian Education Forum; Koemhong Sol, Kimkong Heng; Internet Encyclopedia of Philosophy) and a posteriori blending it with experience. In epistemology, careful reasoning exemplifies epistemic virtues (Stanford Encyclopedia of Philosophy; Matthias Steup, Ram Neta), John Greco lists it among intellectual virtues enabling truth attainment (Internet Encyclopedia of Philosophy), and replacement naturalism advocates studying human reasoning psychologically (Stanford Encyclopedia of Philosophy). Cognitive psychology examines reasoning alongside perception and memory (Klinikong; Fiveable), functionalism explains it via behavioral adaptation (Internet Encyclopedia of Philosophy), and the prefrontal cortex supports it though not exclusively for visual consciousness (Allen Institute; Liz Dueweke). In AI, Large Language Models support agent reasoning but face challenges like hallucinations and learning difficulties, as noted by Samy Bengio (Skywritings Press) and Luo et al. (arXiv; Benedikt Reitemeyer, Hans-Georg Fill); techniques like Chain-of-Thought prompting enhance LLM reasoning (arXiv). 
Rationalists view reasoning-derived knowledge as eternal (Rebus Community; K. S. Sangeetha), Xunzi integrated it with empirical standards (Wikipedia), and recent findings blur its separation from emotion (Journal of Psychoanalysis). Fallacies stem from incorrect reasoning per epistemic approaches (Wikipedia), and reliabilism assesses beliefs by reasoning type (Internet Encyclopedia of Philosophy).
openrouter/x-ai/grok-4.1-fast 75% confidence
Reasoning emerges as a central topic across cognitive science, AI development, philosophy, and knowledge systems based on the provided facts. The UQÀM Cognitive Science Institute hosted a 2016 summer school on Reasoning, alongside other cognition themes, indicating its status as a dedicated research area. In AI, effective medical AI demands sophisticated reasoning from general intelligence per medRxiv, while Planbench by Valmeekam et al. benchmarks LLMs on planning and reasoning as published in Frontiers. Gary Marcus, in a 2019 arXiv-cited Montreal AI Debate argument, deemed monolithic architectures unrealistic for abstraction and reasoning. Philosophically, the Internet Encyclopedia of Philosophy defines justified beliefs via sound reasoning and evidence. Empirically, Nature reports IKEDS recommendations achieving 85% expert-rated reasoning depth for indirect implications, outperforming baselines. Additionally, Frontiers highlights prioritized studies on Knowledge Graph reasoning challenges for novelty and dataset impact. These facts portray reasoning as a multifaceted capability essential for justification, AI evaluation, and complex inference.

Facts (117)

Sources
The Synergy of Symbolic and Connectionist AI in LLM ... arxiv.org arXiv 8 facts
claim: Tree-of-Thought (ToT) prompting allows LLMs to explore multiple reasoning paths simultaneously in a tree structure.
claim: Large Language Models are trained on large-scale transformers comprising billions of learnable parameters to support abilities including perception, reasoning, planning, and action.
claim: Knowledge in Large Language Models (LLMs) is embedded within the model weights, which allows for more flexible and context-driven reasoning.
claim: Neuro-vector-symbolic architectures are a proposed future direction in AI that integrates vector manipulation to enhance the reasoning capabilities of agents.
claim: LLM-empowered Autonomous Agents demonstrate advanced reasoning, planning, and decision-making abilities.
claim: Symbolic AI is a paradigm that emphasizes symbolic representation and logic, utilizing rule-based systems to perform reasoning and decision-making tasks.
procedure: Program-proof-of-thoughts (P2oT) prompting is a future direction in AI that breaks down complex reasoning processes into verifiable propositions, utilizing program proof languages such as Dafny for structured verification.
claim: The Chain-of-Thought (CoT) method enhances the cognitive task performance of LLM-empowered agents by guiding the models to generate text about intermediate reasoning steps.
Unlocking the Potential of Generative AI through Neuro-Symbolic ... arxiv.org arXiv Feb 16, 2025 7 facts
claim: Reasoning in Neuro-Symbolic AI (NSAI) architectures reflects the model's ability to analyze data, extract insights, and draw logical conclusions by combining neural learning with symbolic reasoning.
procedure: The study evaluates Neuro Symbolic Neuro architectures against criteria including generalization, scalability, data efficiency, reasoning, robustness, transferability, and interpretability.
claim: Reasoning and inference methods, such as chain-of-thought (CoT) reasoning and link prediction, enhance the logical decision-making capabilities of AI systems.
claim: The Neuro Symbolic Neuro architecture is the best-performing model, consistently achieving high ratings across data efficiency, reasoning, robustness, transferability, and interpretability criteria.
claim: Symbolic AI is characterized by strengths in reasoning and interpretability, whereas neural AI is characterized by strengths in learning from vast amounts of data.
claim: Neuro-symbolic artificial intelligence (NSAI) aims to enhance generalization, reasoning, and scalability in AI systems while addressing challenges related to transparency and data efficiency.
quote: Gary Marcus argued during the 2019 Montreal AI Debate that 'expecting a monolithic architecture to handle abstraction and reasoning is unrealistic,' emphasizing the limitations of current AI systems.
The Synergy of Symbolic and Connectionist AI in LLM-Empowered ... arxiv.org arXiv Jul 11, 2024 7 facts
reference: Shunyu Yao et al. published 'ReAct: Synergizing reasoning and acting in language models' as an arXiv preprint (arXiv:2210.03629) in 2022.
reference: Lauren Nicole DeLong, Ramon Fernández Mir, and Jacques D Fleuriot conducted a survey on neurosymbolic AI techniques for reasoning over knowledge graphs.
reference: The architecture of an LAA consists of a neural sub-system (LLM) acting as a core controller, which orchestrates a symbolic sub-system and external tools, including components for planning, reasoning, memory, and tool-use.
claim: The synergy of Ontologies and Markov-logic networks improved the ability of symbolic AI to perform robust reasoning over large datasets.
claim: Large Language Models (LLMs) are trained on large-scale transformers comprising billions of learnable parameters to support agent abilities such as perception, reasoning, planning, and action.
reference: Ronald Brachman and Hector Levesque authored a foundational text on knowledge representation and reasoning.
reference: Artur d’Avila Garcez et al. discussed the contributions and challenges of neural-symbolic learning and reasoning.
Understanding LLM Understanding skywritingspress.ca Skywritings Press Jun 14, 2024 6 facts
claim: Samy Bengio, the Senior Director of AI and Machine Learning Research at Apple, defines reasoning as the action of drawing conclusions efficiently by composing learned concepts.
claim: Eva Portelance studies how both humans and machines learn to understand language and reason about complex problems.
claim: Large language models generate coherent, grammatical text, which can lead to the perception that they are 'thinking machines' capable of abstract knowledge and reasoning.
claim: Samy Bengio from Apple presented on the difficulty of learning to reason at the 'Understanding LLM Understanding' summer school.
perspective: Some researchers argue that reasoning, understanding, and other human-like capacities may be emergent properties of large language models.
account: The UQÀM Cognitive Science Institute hosted summer schools on various topics including Categorization (2003), Social Cognition (2008), Origin of Language (2010), Origin and Function of Consciousness (2012), Web Science and the Mind (2014), Reasoning (2016), The Other Minds Problem: Animal Sentience and Cognition (2018), and Cognitive Challenges of Climate Change (2021).
Epistemology - Wikipedia en.wikipedia.org Wikipedia 5 facts
claim: Logic is defined as the study of correct reasoning.
claim: Xunzi (c. 310–220 BCE) aimed to combine empirical observation and rational inquiry, emphasizing the importance of clarity and standards of reasoning without excluding the role of feeling and emotion.
claim: The epistemic approach to fallacies defines fallacies as faulty arguments based on incorrect reasoning and asserts that an argument is a fallacy if it fails to expand knowledge.
reference: Robert Morrison contributed to 'The Cambridge Handbook of Thinking and Reasoning', published by Cambridge University Press in 2005.
claim: Whether an inferential belief amounts to knowledge depends on the form of reasoning used, specifically that the process does not violate the laws of logic.
Practices, opportunities and challenges in the fusion of knowledge ... frontiersin.org Frontiers 4 facts
reference: Cao and Liu (2023) proposed RELMKG, a method for reasoning with pre-trained language models and knowledge graphs for complex question answering, published in Applied Intelligence.
claim: The KG-CoT framework (Zhao et al., 2024) suffers from opacity because its generated rationales often conflate genuine reasoning with post-hoc justifications.
claim: Text corpora may lack structure and factual consistency, which creates challenges for performing precise knowledge extraction and reasoning.
account: The authors prioritized studies addressing fundamental challenges in Knowledge Graph construction, embedding, and reasoning, evaluating them based on methodological novelty and impact on standard datasets.
Epistemology | Internet Encyclopedia of Philosophy iep.utm.edu Internet Encyclopedia of Philosophy 4 facts
claim: All knowledge requires reasoning, as data must be analyzed and inferences must be drawn from sensory input.
claim: Reliabilism evaluates beliefs by identifying the specific cognitive process that led to their formation, such as the specific sense used, the source of testimony, the type of reasoning, or the recency of a memory.
claim: Knowledge of abstract or non-empirical facts relies exclusively on reasoning.
claim: A belief is considered justified if it is obtained in the right way, which typically involves sound reasoning and solid evidence rather than luck or misinformation.
Epistemic Justification – Introduction to Philosophy: Epistemology press.rebus.community Todd R. Long · Rebus Community 3 facts
claim: Some philosophers use the term 'induction' to encompass any non-deductive form of reasoning, including abduction.
reference: Deduction is a form of reasoning in which the truth of the premises logically guarantees the truth of the conclusion.
reference: Induction is a form of reasoning in which the truth of the premises makes the truth of the conclusion probable.
Applying Large Language Models in Knowledge Graph-based ... arxiv.org Benedikt Reitemeyer, Hans-Georg Fill · arXiv Jan 7, 2025 3 facts
claim: Ontologies use formal notation and axioms to enable reasoning and inferencing, which allows for the derivation of new knowledge.
claim: Knowledge graphs can derive new knowledge through reasoning and describe real-world entities from open knowledge bases (such as DBpedia, schema.org, or YAGO) or organization-specific entities.
claim: Luo et al. argue that Large Language Models are skilled at reasoning in complex tasks but struggle with up-to-date knowledge and hallucinations, which negatively impact performance and trustworthiness.
A Survey on the Theory and Mechanism of Large Language Models arxiv.org arXiv Mar 12, 2026 3 facts
claim: Wang et al. (2025d) found that factual question answering tasks demonstrate the strongest memorization effect, which increases with model size, whereas tasks like machine translation and reasoning exhibit greater generalization.
reference: The paper 'Towards reasoning era: a survey of long chain-of-thought for reasoning large language models' is an arXiv preprint, identified as arXiv:2503.09567.
reference: The paper 'Training large language models to reason in a continuous latent space' (arXiv:2412.06769) is cited in the survey 'A Survey on the Theory and Mechanism of Large Language Models' regarding reasoning.
The Year of Neuro-Symbolic AI: How 2026 Makes Machines Actually ... cogentinfo.com Cogent Infotech Dec 30, 2025 3 facts
claim: Traditional predictive artificial intelligence systems lack the depth required for contextual interpretation and reasoning.
claim: Neuro-symbolic AI architecture separates learning from reasoning, avoiding the need for brute-force data ingestion by layering structured reasoning atop adaptive learning.
claim: A neuro-symbolic system separates perception from reasoning, ensuring that real-world inputs are transformed into structured intelligence before any decision is made, which allows the system to explain its choices, maintain compliance, and adapt to complexity.
Detecting hallucinations with LLM-as-a-judge: Prompt ... - Datadog datadoghq.com Aritra Biswas, Noé Vernier · Datadog Aug 25, 2025 3 facts
claim: SLM-as-a-judge approaches for hallucination detection often fail in complex use cases, particularly when the context and answer are large and involve layers of reasoning.
claim: While structural constraints can guide reasoning in Large Language Models by enforcing a consistent format, strict enforcement of these constraints may hinder the model's ability to reason effectively.
procedure: Datadog's approach to hallucination detection involves enforcing structured output and guiding reasoning through explicit prompts.
Sources of Knowledge: Rationalism, Empiricism, and the Kantian ... press.rebus.community K. S. Sangeetha · Rebus Community 2 facts
claim: David Hume asserts that reasoning aims to gain knowledge of the world through two methods: "relations of ideas" and "matters of fact."
claim: Rationalists argue that knowledge accessed through reasoning is eternal, meaning it exists unchanged throughout the past, present, and future.
A Comprehensive Review of Neuro-symbolic AI for Robustness ... link.springer.com Springer Dec 9, 2025 2 facts
reference: Yi, K., Gan, C., Li, Y., Kohli, P., Wu, J., Torralba, A., and Tenenbaum, J.B. introduced 'CLEVRER', a dataset for collision events in video representation and reasoning.
reference: K. Xu, A. Srivastava, D. Gutfreund, F. Sosa, T. Ullman, J. Tenenbaum, and C. Sutton published 'A Bayesian-symbolic approach to reasoning and learning in intuitive physics' in the Advances in Neural Information Processing Systems (NeurIPS) proceedings in 2021.
LLM-empowered knowledge graph construction: A survey - arXiv arxiv.org arXiv Oct 23, 2025 2 facts
reference: Zhu et al. (2024b) authored 'Llms for knowledge graph construction and reasoning: Recent capabilities and future opportunities', published in World Wide Web, volume 27, issue 5, article 58.
claim: Knowledge Graphs serve as a fundamental infrastructure for structured knowledge representation and reasoning, providing a unified semantic foundation for applications such as semantic search, question answering, and scientific discovery.
Understanding epistemology and its key approaches in research cefcambodia.com Koemhong Sol, Kimkong Heng · Cambodian Education Forum Jan 21, 2023 2 facts
claim: A posteriori knowledge depends on specific sensory experiences and the use of reasoning, such as knowledge of colors, shapes, and natural sciences.
claim: A priori knowledge is justified independently of any experience and relies solely on reasoning, such as the mathematical statement 5 + 5 = 10.
Neuro-Symbolic AI: Explainability, Challenges, and Future Trends arxiv.org arXiv Nov 7, 2024 2 facts
reference: Fadi Al Machot (2023) introduced ASPER, a neural-symbolic approach for enhanced reasoning in neural models, published as an arXiv preprint.
reference: Stehr et al. (2022) proposed a probabilistic approximate logic framework for neuro-symbolic learning and reasoning.
Medical Hallucination in Foundation Models and Their Impact on ... medrxiv.org medRxiv Nov 2, 2025 2 facts
perspective: The authors of the study 'Medical Hallucination in Foundation Models and Their Impact on ...' argue that medical hallucination is a reasoning-driven failure mode rather than a knowledge deficit, and that safety emerges from sophisticated reasoning capabilities and broad knowledge integration rather than narrow optimization.
perspective: Effective medical AI may require sophisticated reasoning and knowledge integration capabilities that emerge from large-scale general intelligence development rather than narrow domain optimization.
Medical Hallucination in Foundation Models and Their ... medrxiv.org medRxiv Mar 3, 2025 2 facts
claim: OpenAI's o1-preview model was introduced in September 2024 and is designed to spend more time thinking before responding to enhance reasoning capabilities for complex tasks.
claim: OpenAI's o3-mini model was introduced in January 2025 and is designed to spend more time thinking before responding to enhance reasoning capabilities for complex tasks.
LLM Observability: How to Monitor AI When It Thinks in Tokens | TTMS ttms.com TTMS Feb 10, 2026 2 facts
claim: Monitoring latency alongside output quality helps identify the optimal performance balance for LLMs, as slight delays may indicate the model is performing more reasoning.
reference: An LLM trace is a concept in LLM observability that records the sequence of events and decisions related to a single AI task, including the original user prompt, system or context prompts, raw model output, and step-by-step reasoning if tools or agent frameworks are used.
A survey on augmenting knowledge graphs (KGs) with large ... link.springer.com Springer Nov 4, 2024 2 facts
claim: In a synergized framework, Large Language Models use structured knowledge from Knowledge Graphs to improve reasoning and understanding, while Knowledge Graphs utilize the language production and contextual capabilities of Large Language Models.
reference: CommonsenseQA is a benchmark that evaluates the ability of models to use commonsense knowledge to answer questions, testing reasoning capabilities regarding everyday scenarios.
LLM-Powered Knowledge Graphs for Enterprise Intelligence and ... arxiv.org arXiv Mar 11, 2025 2 facts
claim: Unified knowledge graphs offer the ability to model relationships between disparate data facets, allowing for cohesive reasoning and representation.
procedure: The framework automates entity extraction, relationship inference, and semantic enrichment to enable querying, reasoning, and analytics across diverse data types including emails, calendars, chats, documents, and logs.
Neuro-symbolic AI - Wikipedia en.wikipedia.org Wikipedia 2 facts
reference: Artur d'Avila Garcez, Marco Gori, Luis C. Lamb, Luciano Serafini, Michael Spranger, and Son N. Tran published 'Neural-Symbolic Computing: An Effective Methodology for Principled Integration of Machine Learning and Reasoning', arguing for a principled approach to combining these fields.
quote: Gary Marcus stated: "We cannot construct rich cognitive models in an adequate, automated way without the triumvirate of hybrid architecture, rich prior knowledge, and sophisticated techniques for reasoning."
Knowledge Graphs: Opportunities and Challenges - Springer Nature link.springer.com Springer Apr 3, 2023 2 facts
claim: Knowledge graph-based information retrieval offers the advantage of semantic representation of items, where items are represented via a formal and interlinked model that supports semantic similarity, reasoning, and query expansion, leading to improved interpretability and relevance.
claim: Wan G, Pan S, Gong C et al. published the paper 'Reasoning like human: hierarchical reinforcement learning for knowledge graph reasoning' in the Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence in 2021.
Evolutionary Psychology | Internet Encyclopedia of Philosophy iep.utm.edu Internet Encyclopedia of Philosophy 2 facts
reference: Leda Cosmides and John Tooby authored 'Reasoning and Natural Selection,' which was published in the Encyclopedia of Human Biology in 1991.
reference: Peter Wason published 'Reasoning' in the book 'New Horizons in Psychology' in 1966.
Virtue Epistemology | Internet Encyclopedia of Philosophy iep.utm.edu Internet Encyclopedia of Philosophy 1 fact
claim: John Greco defines intellectual virtues as innate faculties or acquired habits, such as perception, reliable memory, and good reasoning, that enable a person to reach truth and avoid error in a relevant field.
Survey and analysis of hallucinations in large language models frontiersin.org Frontiers Sep 29, 2025 1 fact
reference: Yao et al. (2022) introduced 'ReAct,' a method for synergizing reasoning and acting in language models.
Knowledge Graph Combined with Retrieval-Augmented Generation ... drpress.org Academic Journal of Science and Technology Dec 2, 2025 1 fact
claim: Integrating Knowledge Graphs (KGs) with Retrieval-Augmented Generation (RAG) enhances the knowledge representation and reasoning abilities of Large Language Models (LLMs) by utilizing structured knowledge, which enables the generation of more accurate answers.
Not Minds, but Signs: Reframing LLMs through Semiotics - arXiv arxiv.org arXiv Jul 1, 2025 1 fact
reference: Dasgupta et al.'s 2022 paper 'Language models show human-like content effects on reasoning tasks' demonstrates that large language models exhibit reasoning patterns similar to humans.
Detecting and Evaluating Medical Hallucinations in Large Vision ... arxiv.org arXiv Jun 14, 2024 1 fact
reference: The paper 'Llava-next: Improved reasoning, ocr, and world knowledge' by Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee, published in 2024, discusses improvements in reasoning, OCR, and world knowledge for the Llava-next model.
Dualism, Physicalism, and Philosophy of Mind - Capturing Christianity capturingchristianity.com Capturing Christianity Dec 11, 2019 1 fact
claim: Arguments against physicalism include those based on intentionality (the 'aboutness' of thoughts), the 'unity of consciousness,' and the human ability to reason.
Rationalism Vs. Empiricism 101: Which One is Right? - TheCollector thecollector.com The Collector Nov 9, 2023 1 fact
claim: Immanuel Kant posits that human cognition relies on two essential powers: perceiving (sensibility) and understanding (reasoning), and that knowledge is impossible without both faculties.
Psychology and Cognitive Science on Consciousness klinikong.com Klinikong 1 fact
claim: Cognitive psychology examines internal mental processes, including perception, memory, reasoning, and decision-making.
Renewable Energy's Land Use Reckoning kleinmanenergy.upenn.edu Kleinman Center for Energy Policy Jun 3, 2025 1 fact
reference: REZoning is a web-based spatial planning tool developed by the Multi Criteria Analysis for Planning Renewable Energy Initiative in collaboration with the World Bank's Electricity Sector Management Assistance Program (ESMAP).
A Comprehensive Benchmark and Evaluation Framework for Multi ... arxiv.org arXiv Jan 6, 2026 1 fact
reference: Gheorghe Comanici et al. published a technical report on Gemini 2.5 in 2025, highlighting its advanced reasoning, multimodality, long context, and agentic capabilities.
The Integration of Symbolic and Connectionist AI in LLM-Driven ... econpapers.repec.org Ankit Sharma · Journal of Artificial Intelligence General science 1 fact
claim: Symbolic AI is characterized by structured, rule-based logic and excels at encoding explicit knowledge and facilitating reasoning.
Parenting styles: An evidence-based, cross-cultural guide parentingscience.com Parenting Science 1 fact
reference: The Parenting Styles and Dimensions Questionnaire (PSDQ) includes items measuring reasoning, such as 'I give my child reasons why rules should be obeyed' and 'I help my child understand the impact of behavior by encouraging my child to talk about the consequents of his/her own actions,' which are associated with authoritative parenting.
Unknown source 1 fact
reference: The paper titled 'A review of neuro-symbolic AI integrating reasoning and learning for ...' analyzes the current state of neuro-symbolic AI by emphasizing techniques that integrate reasoning and learning.
Hard Problem of Consciousness | Internet Encyclopedia of Philosophy iep.utm.edu Internet Encyclopedia of Philosophy 1 fact
claim: Psychological phenomena such as learning, reasoning, and remembering are explained by their functional roles, where a system is defined by its ability to alter behavior appropriately in response to environmental stimulation.
Naturalized Epistemology - Stanford Encyclopedia of Philosophy plato.stanford.edu Stanford Encyclopedia of Philosophy Jul 5, 2001 1 fact
claim: Replacement naturalism is a view within naturalized epistemology that recommends replacing traditional epistemology with the psychological study of how humans reason.
LLM-KG4QA: Large Language Models and Knowledge Graphs for ... github.com GitHub 1 fact
referenceThe paper titled 'Synergizing RAG and Reasoning: A Systematic Review' was published on arXiv in 2025.
Social Epistemology - Open Encyclopedia of Cognitive Science oecs.mit.edu MIT Press Jul 24, 2024 1 fact
claimWhen someone offers testimony, there is a general expectation that they are able to survey their own reasoning, transfer it to others via explanation, and detect and trace flaws in it when these are pointed out.
Consciousness and Cognitive Sciences journal-psychoanalysis.eu Journal of Psychoanalysis 1 fact
claimRecent research indicates that the separation between reasoning and emotions is disappearing, with evidence highlighting the importance of the amygdala, the lateralization of emotional processes, and the role of arousal in emotional memory.
What Is Epistemology? Pt. 3: The Nature of Justification and Belief philosimplicity.com Philosimplicity Oct 23, 2017 1 fact
claimRené Descartes embraced internalism when he utilized radical skepticism and reasoning to attempt to understand the world.
Epistemology (Stanford Encyclopedia of Philosophy/Fall 2019 Edition) plato.stanford.edu Stanford Encyclopedia of Philosophy Dec 14, 2005 1 fact
claimCareful and attentive reasoning is an example of an epistemic virtue, while jumping to conclusions is an example of an epistemic vice.
Epistemological Problems of Testimony plato.stanford.edu Stanford Encyclopedia of Philosophy Apr 1, 2021 1 fact
referenceTyler Burge published the essay 'Postscript: Content Preservation' in the book 'Cognition Through Understanding: Self-Knowledge, Interlocution, Reasoning, Reflection: Philosophical Essays, Volume 3' in 2013.
Naturalized epistemology and cognitive science | Intro to... - Fiveable (fiveable.me)
claim: Cognitive psychology focuses on mental processes including perception, attention, memory, and reasoning.
Large Language Models Meet Knowledge Graphs for Question ... (arxiv.org), arXiv, Sep 22, 2025
reference: XplainLLM (Chen et al., 2024d) is a question-answering dataset for Large Language Models and Knowledge Graphs that focuses on question-answering explainability and reasoning.
Influence of behavioral biases on investment decisions. The ... (revistas.usc.gal), Revistas USC
reference: K. Stanovich and R. West's 2000 paper 'Individual differences in reasoning: Implications for the rationality debate?' explores variations in human reasoning and rationality.
Landmark experiment sheds new light on the origins of consciousness (alleninstitute.org), Liz Dueweke, Allen Institute
claim: The study suggests that while the prefrontal cortex is important for reasoning and planning, it may not be the primary hub for all visual specifics of conscious experience.
7.1 What Epistemology Studies - Introduction to Philosophy (openstax.org), OpenStax, Jun 15, 2022
claim: Inference is defined as a stepwise process of reasoning that moves from one idea to another.
Neuro-Symbolic AI: The Hybrid Future of Intelligent Systems (linkedin.com), Leo Akin-Odutola, LinkedIn, Aug 26, 2025
claim: Neuro-symbolic systems are designed using insights from human cognition and neuroscience, which influences how perception, reasoning, and abstraction are integrated into these systems.
Neurodiversity in Practice: a Conceptual Model of Autistic Strengths ... (link.springer.com), Springer, Jul 25, 2023
reference: Baron-Cohen et al. (2009) proposed the hyper-systemizing theory, which argues that the excellent attention to detail and reasoning of autistic individuals produce talent in system domains such as mathematics, music, and language.
Empowering GraphRAG with Knowledge Filtering and Integration (arxiv.org), arXiv, Mar 18, 2025
reference: Sun et al. authored 'Think-on-graph: Deep and responsible reasoning of large language model on knowledge graph', published in The Twelfth International Conference on Learning Representations.
KG-RAG: Bridging the Gap Between Knowledge and Creativity (arxiv.org), arXiv, May 20, 2024
claim: The brain of an AI agent serves as the decision-making core responsible for reasoning, planning, and storing the agent's knowledge and memories.
The State Of The Art On Knowledge Graph Construction From Text (nlpsummit.org), NLP Summit
claim: Nandana Mihindukulasooriya's research interests include relation extraction and linking, information extraction, knowledge representation and reasoning, and Neuro-Symbolic AI.
A Survey of Incorporating Psychological Theories in LLMs (arxiv.org), arXiv
claim: Cognitive psychology is applied across all stages of Large Language Model development, specifically for modeling internal mechanisms such as reasoning, memory, and attention.
Papers - Dr Vaishak Belle (vaishakbelle.github.io)
reference: Vaishak Belle authored 'Implicit Learning as Reasoning: Language-Agnostic Semantics for Learning from Partial Observations', published at IJCLR in 2024.
Combining large language models with enterprise knowledge graphs (frontiersin.org), Frontiers, Aug 26, 2024
reference: The paper 'Planbench: an extensible benchmark for evaluating large language models on planning and reasoning about change' by Valmeekam et al. (2024) presents a benchmark designed to evaluate the planning and reasoning capabilities of large language models.
Construction of intelligent decision support systems through ... (nature.com), Nature, Oct 10, 2025
measurement: Experts rated IKEDS recommendations as having an 85% depth rating for reasoning, compared to 47–61% for baseline approaches, noting the consideration of indirect implications and long-term consequences.