concept

iterative reasoning

Also known as: iterative reasoning mechanism

Facts (12)

Sources
Large Language Models Meet Knowledge Graphs for Question ... (arXiv, Sep 22, 2025; 4 facts)
claim: Incorporating knowledge graphs with LLMs enables multi-hop and iterative reasoning over factual knowledge, augmenting the reasoning capability of LLMs for complex question answering.
claim: Strategies to mitigate the quadratic computational growth of iterative reasoning in LLM+KG systems include caching subgraphs, reusing intermediate embeddings, and exploiting hardware that favors incremental computation.
claim: Fusing knowledge from LLMs and Knowledge Graphs augments question decomposition in multi-hop question answering, facilitating iterative reasoning to generate accurate final answers.
reference: KG-IRAG, as described by Yang et al. (2025), uses incremental retrieval and iterative reasoning with Llama-3-8B-Instruct, GPT-3.5-Turbo, GPT-4o-mini, and GPT-4o on self-constructed knowledge graphs for temporal QA tasks.
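One mitigation named above, caching subgraphs so repeated hops in iterative reasoning do not recompute neighborhood expansions, can be sketched as follows. The toy graph, the `subgraph` helper, and the call counter are illustrative assumptions, not from the paper:

```python
from functools import lru_cache

# Toy adjacency list standing in for a knowledge graph (assumption).
GRAPH = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
CALLS = {"count": 0}  # counts actual (non-cached) expansions

@lru_cache(maxsize=None)
def subgraph(entity: str, hops: int) -> frozenset:
    """Return the nodes reachable from `entity` within `hops` steps.
    lru_cache memoizes each (entity, hops) pair, so an iterative
    reasoning loop that revisits the same neighborhood pays only once."""
    CALLS["count"] += 1
    if hops == 0:
        return frozenset({entity})
    reachable = {entity}
    for neighbor in GRAPH.get(entity, []):
        reachable |= subgraph(neighbor, hops - 1)
    return frozenset(reachable)

subgraph("a", 2)
subgraph("a", 2)  # second call is a cache hit; no new expansions
```

The same idea applies to reusing intermediate embeddings: key the cache on the entity (or subgraph) identity so later reasoning steps look up rather than recompute.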
Leveraging Knowledge Graphs and LLM Reasoning to Identify ... (arXiv, Jul 23, 2025; 3 facts)
claim: The LLM agent in the authors' framework uses an iterative reasoning mechanism, as referenced in Wei et al. (2022) and Luo et al. (2023), to perform diagnostic analysis for warehouse planning.
procedure: The LLM-based agent in the proposed framework employs an iterative reasoning mechanism that interprets natural language questions by generating sequential, interdependent sub-questions, each conditioned on the evidence from answers to the previous ones.
reference: The proposed framework for warehouse operational analysis consists of two main components: the ontological construction of a Knowledge Graph from Discrete Event Simulation output data, and an LLM agent equipped with an iterative reasoning mechanism featuring sequential sub-questioning, Cypher generation for Knowledge Graph interaction, and self-reflection.
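The sub-questioning loop described in these facts can be sketched as below. This is a minimal illustration, not the paper's implementation: `ask_llm` and `run_cypher` are hypothetical callables, and the prompts are placeholders.

```python
def iterative_reasoning(question, ask_llm, run_cypher, max_steps=5):
    """Answer `question` by generating sequential, interdependent
    sub-questions, each conditioned on evidence gathered so far.
    `ask_llm` and `run_cypher` are assumed interfaces (not from the paper)."""
    evidence = []
    for _ in range(max_steps):
        # Next sub-question is conditioned on answers to previous ones.
        sub_q = ask_llm(f"Question: {question}\nEvidence: {evidence}\n"
                        "Next sub-question (or DONE):")
        if sub_q == "DONE":
            break
        # Cypher generation for Knowledge Graph interaction.
        cypher = ask_llm(f"Write a Cypher query for: {sub_q}")
        answer = run_cypher(cypher)
        # Self-reflection: keep only evidence judged relevant.
        if ask_llm(f"Is '{answer}' relevant to '{sub_q}'? yes/no") == "yes":
            evidence.append((sub_q, answer))
    return ask_llm(f"Final answer to '{question}' given {evidence}:")

# Demo with scripted stubs (purely illustrative responses):
script = iter([
    "Which zone has the longest pick time?",                       # sub-question 1
    "MATCH (z:Zone) RETURN z ORDER BY z.pick_time DESC LIMIT 1",   # generated Cypher
    "yes",                                                         # self-reflection
    "DONE",                                                        # stop condition
    "Zone A is the bottleneck.",                                   # final answer
])
print(iterative_reasoning("Where is the warehouse bottleneck?",
                          lambda p: next(script), lambda c: "Zone A"))
```

The key design point is that the prompt for each sub-question carries the accumulated `evidence`, which is what makes the sub-questions interdependent rather than independent decompositions.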
A Survey on the Theory and Mechanism of Large Language Models (arXiv, Mar 12, 2026; 2 facts)
claim: Performance gains in large language models come not only from scaling data and model size during training, but also from increasing test-time computation, such as allowing the model to perform recurrent or iterative reasoning.
claim: Chain-of-thought (CoT) reasoning has significantly increased the expressive power of large language models, leading researchers to investigate how to incorporate iterative reasoning implicitly into a model's inductive bias.
KG-IRAG with Iterative Knowledge Retrieval (arXiv, Mar 18, 2025; 1 fact)
claim: Knowledge Graph-Based Iterative Retrieval-Augmented Generation (KG-IRAG) is a framework that integrates Knowledge Graphs with iterative reasoning to improve Large Language Models' ability to handle queries involving temporal and logical dependencies.
Grounding LLM Reasoning with Knowledge Graphs (arXiv, Dec 4, 2025; 1 fact)
procedure: There are four primary methods for integrating Knowledge Graphs with Large Language Models: (1) learning graph representations, (2) using Graph Neural Network (GNN) retrievers to extract entities as text input, (3) generating code such as SPARQL queries to retrieve information, and (4) using step-by-step interaction methods for iterative reasoning.
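Method (3) above, generating queries to retrieve information, can be illustrated with a toy in-memory triple store. The store, the pattern-matching `run_query` helper, and the hard-coded pattern are assumptions for demonstration; a real system would have the LLM emit SPARQL against an actual endpoint:

```python
# Toy triple store standing in for a knowledge graph (assumption).
TRIPLES = {
    ("Berlin", "capitalOf", "Germany"),
    ("Paris", "capitalOf", "France"),
}

def run_query(subject=None, predicate=None, obj=None):
    """Match triples against a pattern; None plays the role of a
    SPARQL variable (e.g. SELECT ?s WHERE { ?s :capitalOf :France })."""
    return [t for t in TRIPLES
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# In method (3) an LLM would generate this pattern; here it is hard-coded:
print(run_query(predicate="capitalOf", obj="France"))
```

The retrieved triples would then be serialized back into the prompt, which is what distinguishes (3) from the step-by-step interaction of method (4), where retrieval and reasoning interleave over multiple turns.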
KG-IRAG: A Knowledge Graph-Based Iterative Retrieval-Augmented ... (arXiv, Mar 18, 2025; 1 fact)
procedure: The iterative reasoning process for temporal queries in KG-IRAG works as follows: (1) after each retrieval of triplets, LLM2 evaluates whether the current set of triplets, combined with the reasoning prompt from LLM1, is sufficient to answer the query; (2) if LLM2 determines the data is insufficient, the system shifts to a different time or location to refine the search parameters; and (3) this process repeats until LLM2 confirms the query can be resolved.
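The three-step control loop above can be sketched as follows. The interfaces (`retrieve`, `llm2_sufficient`, `next_params`) are assumed stand-ins for the paper's components, not its actual API:

```python
def kg_irag_loop(query, reasoning_prompt, retrieve, llm2_sufficient,
                 next_params, start, max_iters=10):
    """Sketch of the KG-IRAG iterative retrieval loop:
    (1) retrieve triplets for the current (time, location) parameters,
    (2) let LLM2 judge sufficiency given LLM1's reasoning prompt,
    (3) otherwise shift time/location and repeat."""
    params = start
    triplets = []
    for _ in range(max_iters):
        triplets += retrieve(*params)                           # step (1)
        if llm2_sufficient(query, reasoning_prompt, triplets):  # step (2)
            return triplets  # LLM2 confirms the query can be resolved
        params = next_params(params)                            # step (3)
    return triplets  # best effort after max_iters

# Demo with deterministic stubs: data only exists at hour 2.
data = {(1, "A"): [], (2, "A"): [("rain", "at", "A")]}
out = kg_irag_loop(
    "Did it rain at A?", "check precipitation",
    retrieve=lambda t, loc: data.get((t, loc), []),
    llm2_sufficient=lambda q, p, tr: len(tr) > 0,
    next_params=lambda p: (p[0] + 1, p[1]),
    start=(1, "A"),
)
print(out)
```

The `max_iters` bound is an added safety valve (an assumption, not stated in the fact): without it, a query the KG cannot answer would shift parameters forever.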