Relations (1)
Related (4.17, strongly supporting, 17 facts)
Large Language Models are explicitly designed to support reasoning as part of their core architecture [1], [2], and researchers actively evaluate these abilities through specialized benchmarks [3] and prompting techniques such as Tree-of-Thought [4]. Knowledge Graphs are also integrated with these models to enhance their reasoning abilities [5], [6], while studies continue to probe the extent to which reasoning is an emergent property of these models [7], [8].
Facts (17)
Sources
The Synergy of Symbolic and Connectionist AI in LLM ... (arxiv.org, 3 facts)
Claim: Tree-of-Thought (ToT) prompting allows LLMs to explore multiple reasoning paths simultaneously in a tree structure.
Claim: Large Language Models are built on large-scale transformers comprising billions of learnable parameters to support abilities including perception, reasoning, planning, and action.
Claim: Knowledge in Large Language Models (LLMs) is embedded within the model weights, which allows for more flexible, context-driven reasoning.
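The Tree-of-Thought claim above can be illustrated with a minimal sketch: expand several candidate "thoughts" per step, score each partial path, and keep only the best few. The `generate_thoughts` and `score` functions below are toy stand-ins for LLM calls, not part of any cited implementation.

```python
# Minimal Tree-of-Thought-style beam search over reasoning paths.
# In a real system, generate_thoughts and score would each be LLM calls.

def generate_thoughts(path):
    """Toy stand-in for an LLM proposing next reasoning steps."""
    return [path + [path[-1] + d] for d in (1, 2, 3)]

def score(path):
    """Toy stand-in for an LLM-based evaluator of a partial path."""
    return sum(path)

def tree_of_thought(depth=3, beam=2):
    frontier = [[0]]  # root thought
    for _ in range(depth):
        # expand every frontier path, then keep the `beam` best candidates
        candidates = [c for p in frontier for c in generate_thoughts(p)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)
```

The key contrast with linear chain-of-thought is that several partial paths survive each step, so a locally weak thought can be discarded before it derails the whole chain.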
Understanding LLM Understanding (skywritingspress.ca, 2 facts)
Claim: Large language models generate coherent, grammatical text, which can lead to the perception that they are 'thinking machines' capable of abstract knowledge and reasoning.
Perspective: Some researchers argue that reasoning, understanding, and other human-like capacities may be emergent properties of large language models.
A Survey on the Theory and Mechanism of Large Language Models (arxiv.org, 2 facts)
Reference: The paper 'Towards reasoning era: a survey of long chain-of-thought for reasoning large language models' is an arXiv preprint, identified as arXiv:2503.09567.
Reference: The paper 'Training large language models to reason in a continuous latent space' (arXiv:2412.06769) is cited in this survey in connection with reasoning.
Knowledge Graph Combined with Retrieval-Augmented Generation ... (drpress.org, 1 fact)
Claim: Integrating Knowledge Graphs (KGs) with Retrieval-Augmented Generation (RAG) enhances the knowledge representation and reasoning abilities of Large Language Models (LLMs) by grounding them in structured knowledge, enabling more accurate answers.
Not Minds, but Signs: Reframing LLMs through Semiotics (arxiv.org, 1 fact)
Reference: Dasgupta et al.'s 2022 paper 'Language models show human-like content effects on reasoning tasks' demonstrates that large language models exhibit reasoning patterns similar to humans.
LLM-empowered knowledge graph construction: A survey (arxiv.org, 1 fact)
Reference: Zhu et al. (2024b) authored 'Llms for knowledge graph construction and reasoning: Recent capabilities and future opportunities', published in World Wide Web, volume 27, issue 5, article 58.
LLM Observability: How to Monitor AI When It Thinks in Tokens (ttms.com, 1 fact)
Claim: Monitoring latency alongside output quality helps identify the optimal performance balance for LLMs, since slight delays may indicate that the model is performing more reasoning.
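The latency-plus-quality monitoring idea above can be sketched as a thin wrapper that times each model call and records a quality score next to the latency. `call_model` and `judge_quality` are hypothetical stand-ins, not an API from the cited article.

```python
import time

def call_model(prompt):
    """Hypothetical model call; replace with a real LLM client."""
    time.sleep(0.01)  # simulate inference latency
    return "stub answer"

def judge_quality(answer):
    """Hypothetical quality score in [0, 1], e.g. from an LLM-as-a-judge."""
    return 0.8 if answer else 0.0

def observed_call(prompt):
    """Run one call and record latency and quality together."""
    start = time.perf_counter()
    answer = call_model(prompt)
    latency = time.perf_counter() - start
    return {"answer": answer, "latency_s": latency, "quality": judge_quality(answer)}
```

Logging the two metrics side by side is what lets an operator distinguish a slow call that bought extra reasoning (higher quality) from one that was simply slow.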
A survey on augmenting knowledge graphs (KGs) with large ... (link.springer.com, 1 fact)
Claim: In a synergized framework, Large Language Models use structured knowledge from Knowledge Graphs to improve reasoning and understanding, while Knowledge Graphs benefit from the language production and contextual capabilities of Large Language Models.
Detecting hallucinations with LLM-as-a-judge: Prompt ... (datadoghq.com, 1 fact)
Claim: While structural constraints can guide reasoning in Large Language Models by enforcing a consistent format, strict enforcement of these constraints may hinder the model's ability to reason effectively.
Large Language Models Meet Knowledge Graphs for Question ... (arxiv.org, 1 fact)
Reference: XplainLLM (Chen et al., 2024d) is a question-answering dataset for Large Language Models and Knowledge Graphs that focuses on explainability and reasoning in question answering.
Applying Large Language Models in Knowledge Graph-based ... (arxiv.org, 1 fact)
Claim: Luo et al. argue that Large Language Models are skilled at reasoning in complex tasks but struggle with out-of-date knowledge and hallucinations, which harm performance and trustworthiness.
Combining large language models with enterprise knowledge graphs (frontiersin.org, 1 fact)
Reference: The paper 'Planbench: an extensible benchmark for evaluating large language models on planning and reasoning about change' by Valmeekam et al. (2024) presents a benchmark for assessing the planning and reasoning capabilities of large language models.