Claim
Current LLM+KG systems face an amortized-reasoning bottleneck: retrieval-and-prompting pipelines re-query the knowledge graph at every beam-search or Chain-of-Thought (CoT) step, so the number of KG queries grows quadratically with reasoning depth.
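The quadratic growth can be illustrated with a minimal sketch of a naive retrieve-per-step pipeline. All names here (`mock_kg_query`, `naive_beam_search`) are hypothetical, not from any particular system: the sketch assumes the pipeline rebuilds its retrieval context from scratch at each step, issuing one KG query per entity accumulated on each beam path.

```python
def mock_kg_query(entity):
    """Stand-in for a knowledge-graph lookup; returns dummy neighbors."""
    return [f"{entity}->n{i}" for i in range(2)]

def naive_beam_search(steps, beam_width):
    """Count KG queries when the retrieval context is rebuilt at every step.

    At step t each beam path holds t entities, and the pipeline re-queries
    the KG for all of them, so total queries = beam_width * (1 + 2 + ... + T)
    = O(beam_width * T^2) -- quadratic in the number of reasoning steps.
    """
    queries = 0
    beams = [["start"]] * beam_width
    for t in range(1, steps + 1):
        new_beams = []
        for path in beams:
            # Naive pipelines re-fetch context for every entity on the path.
            for entity in path:
                mock_kg_query(entity)
                queries += 1
            new_beams.append(path + [f"step{t}"])
        beams = new_beams[:beam_width]
    return queries

# Doubling the step count roughly quadruples the query count:
print(naive_beam_search(10, 4), naive_beam_search(20, 4))  # prints "220 840"
```

Caching or amortizing these lookups (querying each entity once and reusing the result across steps) would bring the count back to linear in path length, which is the gap the claim points at.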

Authors

Sources

Referenced by nodes (2)