concept

Grounding LLM Reasoning with Knowledge Graphs

Facts (21)

Sources
Grounding LLM Reasoning with Knowledge Graphs (arXiv, arxiv.org, Dec 4, 2025) - 21 facts
procedure: The framework proposed in 'Grounding LLM Reasoning with Knowledge Graphs' integrates LLM reasoning with knowledge graphs by linking each step of the reasoning process to graph-structured data, grounding intermediate thoughts into interpretable traces.
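The grounding idea above can be sketched as a loop that attaches supporting graph triples to each reasoning step, producing a trace. This is a minimal illustration, not the paper's implementation: the toy graph, the step texts, and the substring-matching rule are all assumptions made for the example.

```python
# Toy knowledge graph as (head, relation, tail) triples; the KRT39 facts
# are taken from the paper's running example, the matching rule is not.
KG = {
    ("KRT39", "expressed_in", "skin of body"),
    ("KRT39", "expressed_in", "head"),
}

def ground_step(step_text, kg):
    """Attach every triple whose head or tail entity appears in the step.

    The resulting dict is one entry of an interpretable reasoning trace:
    the free-text thought plus the graph evidence that supports it.
    """
    support = [t for t in kg if t[0] in step_text or t[2] in step_text]
    return {"step": step_text, "evidence": support}

# Each intermediate thought is grounded before the next one is generated.
trace = [ground_step(s, KG) for s in [
    "KRT39 is a keratin gene.",
    "Check where KRT39 is expressed.",
]]
```

In the actual framework an LLM generates the steps and the graph is queried rather than scanned, but the trace-of-(step, evidence) structure is the point being illustrated.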
measurement: In the 'Grounding LLM Reasoning with Knowledge Graphs' study, the Llama 3.1 405B-Ins model using the Graph Explore Agent method achieved a score of 41.67 on the Biology dataset.
measurement: In the 'Grounding LLM Reasoning with Knowledge Graphs' study, the Llama 3.1 405B-Ins model using the Graph-RAG method achieved a score of 42.86 on the Biology dataset.
claim: The authors of 'Grounding LLM Reasoning with Knowledge Graphs' used Llama 3.1 Instruct models in 8B, 70B, and 405B sizes as the backend for their experiments, with the 405B model running the FP8 variant.
claim: The framework proposed in 'Grounding LLM Reasoning with Knowledge Graphs' incorporates multiple reasoning strategies: Chain-of-Thought (CoT), Tree-of-Thought (ToT), and Graph-of-Thought (GoT).
claim: The baseline configuration for the experiments in 'Grounding LLM Reasoning with Knowledge Graphs' used Mpnet-v2 as the retriever and FAISS for indexing.
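The baseline retrieval setup can be illustrated with a minimal sketch. The paper's stack is Mpnet-v2 embeddings indexed with FAISS; to keep this example self-contained, hand-written toy vectors stand in for the embeddings and a brute-force cosine-similarity scan stands in for the FAISS index. Only the retrieve-by-similarity pattern carries over.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors (what a flat FAISS
    inner-product index computes over normalized embeddings)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings" for graph-node descriptions (stand-ins for Mpnet-v2).
corpus = {
    "KRT39 is a keratin gene": [0.9, 0.1, 0.0],
    "FAISS builds similarity indexes": [0.1, 0.9, 0.1],
}

def retrieve(query_vec, corpus, k=1):
    """Return the k corpus entries most similar to the query vector."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, corpus[d]),
                    reverse=True)
    return ranked[:k]

top = retrieve([1.0, 0.0, 0.0], corpus)
```

In the real baseline, the query and corpus vectors would come from the Mpnet-v2 encoder and the `sorted` scan would be replaced by a FAISS index lookup; the ranking logic is otherwise the same.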
claim: The method presented in 'Grounding LLM Reasoning with Knowledge Graphs' enhances domain-specific question answering over knowledge graphs by progressively conditioning LLM reasoning at each step and structuring reasoning into incremental steps that interact with graph data.
claim: The gene KRT39 is expressed in the skin of the body, as identified by the knowledge graph extraction process in the paper 'Grounding LLM Reasoning with Knowledge Graphs'.
reference: The experimental results in 'Grounding LLM Reasoning with Knowledge Graphs' compare the performance of Baselines, Text-RAG, Graph-RAG, Graph CoT, Graph Explore, and Graph ToT across the Healthcare, Goodreads, Biology, Chemistry, Materials Science, Medicine, and Physics domains using Llama 3.1 models.
measurement: In the 'Grounding LLM Reasoning with Knowledge Graphs' study, the Llama 3.1 405B-Ins model using the Graph ToT Select method achieved a score of 68.81 on the Medicine dataset.
claim: The experiments in 'Grounding LLM Reasoning with Knowledge Graphs' were conducted on NVIDIA TITAN RTX or NVIDIA A100 GPUs, using Python 3.8 and the vLLM library for model deployment.
reference: The evaluation methodology in 'Grounding LLM Reasoning with Knowledge Graphs' follows the Graph CoT approach from the GRBench paper.
claim: The agentic method for interacting with knowledge graphs outperformed graph-exploration approaches across most datasets and reasoning strategies in the experimental results presented in the paper 'Grounding LLM Reasoning with Knowledge Graphs'.
claim: The gene KRT39 is expressed in the head anatomy, as identified by the knowledge graph extraction process in the paper 'Grounding LLM Reasoning with Knowledge Graphs'.
measurement: The framework proposed in 'Grounding LLM Reasoning with Knowledge Graphs' achieved state-of-the-art performance on GRBench, a benchmark for domain-specific graph reasoning, with at least a 26.5% improvement over Chain-of-Thought (CoT) baselines.
claim: In the framework presented in 'Grounding LLM Reasoning with Knowledge Graphs', graph-exploration strategies perform better with fewer steps, whereas agentic methods improve as the number of steps increases.
measurement: In the 'Grounding LLM Reasoning with Knowledge Graphs' study, the Llama 3.1 405B-Ins model using the Graph ToT (Tree-of-Thought) Agent method achieved a score of 71.67 on the Goodreads dataset.
measurement: In the 'Grounding LLM Reasoning with Knowledge Graphs' study, the Llama 3.1 70B-Ins model using the Graph ToT Agent method achieved a score of 65.48 on the Goodreads dataset.
measurement: In the 'Grounding LLM Reasoning with Knowledge Graphs' study, the Llama 3.1 405B-Ins model using the Graph ToT Select method achieved a score of 72.86 on the Biology dataset.
procedure: The method in 'Grounding LLM Reasoning with Knowledge Graphs' combines reasoning strategies (Chain-of-Thought, Tree-of-Thought, Graph-of-Thought) with two graph-interaction methods: an agent that navigates the graph, and an automatic graph-exploration mechanism driven by the generated text.
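The agentic graph-interaction method can be sketched as an action loop over the graph. In the sketch below a scripted action list stands in for the LLM's outputs; the action names (RetrieveNode, NeighbourCheck, Finish) mirror the Graph CoT interface from the GRBench work the paper builds on, and the tiny graph reuses the paper's KRT39 example. Everything else is an assumption for illustration.

```python
# Toy graph: node -> {relation: [neighbours]}.
GRAPH = {
    "KRT39": {"expressed_in": ["skin of body", "head"]},
    "skin of body": {},
    "head": {},
}

def run_agent(actions, graph):
    """Execute a sequence of (action, argument) pairs against the graph.

    Each observation would normally be fed back into the LLM's context
    before it emits the next action; here the actions are pre-scripted.
    """
    observations = []
    for act, arg in actions:
        if act == "RetrieveNode":
            observations.append(arg if arg in graph else None)
        elif act == "NeighbourCheck":
            node, relation = arg
            observations.append(graph.get(node, {}).get(relation, []))
        elif act == "Finish":
            return arg, observations  # the agent commits to an answer
    return None, observations

answer, obs = run_agent(
    [("RetrieveNode", "KRT39"),
     ("NeighbourCheck", ("KRT39", "expressed_in")),
     ("Finish", ["head", "skin of body"])],
    GRAPH,
)
```

The automatic graph-exploration alternative would skip the action vocabulary and instead expand neighbours of whichever entities appear in the generated text; the agent variant shown here is the one the paper reports as stronger on most datasets.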
account: The system described in 'Grounding LLM Reasoning with Knowledge Graphs' performs a 'Finish' action on the entities 'head' and 'skin of body' after processing KRT39 expression triples.