claim
Current LLM+KG systems face an amortized-reasoning bottleneck: retrieval-and-prompting pipelines re-query the Knowledge Graph at every beam-search or chain-of-thought (CoT) step, so the total number of KG queries grows quadratically with reasoning depth.
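A minimal sketch of the claimed quadratic growth, under an illustrative assumption (not stated in the source): each reasoning step surfaces one new entity, and the pipeline re-queries the KG for every entity accumulated so far. The function name and cost model here are hypothetical.

```python
def kg_queries_per_run(steps: int, beam_width: int = 1) -> int:
    """Total KG queries when step t re-queries all t entities seen so far.

    Illustrative cost model only: one new entity per step, and every
    beam candidate re-queries the KG for each known entity.
    """
    total = 0
    entities = 0
    for _ in range(steps):
        entities += 1                    # one new entity discovered this step
        total += beam_width * entities   # re-query KG for every known entity
    # Closed form: beam_width * steps * (steps + 1) / 2  ->  O(steps^2)
    return total
```

Under this model a 10-step chain with beam width 1 already issues 55 KG queries, and widening the beam multiplies that cost linearly on top of the quadratic depth term.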
Authors
Sources
- Large Language Models Meet Knowledge Graphs for Question ... arxiv.org via serper
Referenced by nodes (2)
- Knowledge Graph concept
- chain-of-thought concept