claim
Knowledge-graph-enhanced large language models often incur high computational overhead because entity linking, graph traversal, and dynamic retrieval must run at inference time; the added latency hinders deployment in real-time applications such as dialogue systems, autonomous agents, and online recommendation.
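The inference-time pipeline the claim describes can be illustrated with a toy sketch. Everything here is hypothetical (the `KG` dictionary, the `entity_link` and `traverse` helpers, and the two-hop limit are illustrative assumptions, not any specific system's API); the point is that these stages execute before the LLM can generate, so their cost adds directly to time-to-first-token.

```python
import time

# Hypothetical in-memory knowledge graph: entity -> list of (relation, neighbor).
KG = {
    "Paris": [("capital_of", "France"), ("located_in", "Europe")],
    "France": [("member_of", "EU")],
}

def entity_link(text):
    # Toy entity linking: match surface tokens against KG entity names.
    return [tok for tok in text.split() if tok in KG]

def traverse(entity, hops=2):
    # Breadth-first traversal collecting facts up to `hops` edges away.
    frontier, facts = [entity], []
    for _ in range(hops):
        next_frontier = []
        for node in frontier:
            for rel, nbr in KG.get(node, []):
                facts.append((node, rel, nbr))
                next_frontier.append(nbr)
        frontier = next_frontier
    return facts

def retrieve_context(query):
    # All three stages run at inference time; their combined latency is
    # paid on every query, before prompt construction and generation.
    t0 = time.perf_counter()
    entities = entity_link(query)
    facts = [f for e in entities for f in traverse(e)]
    latency_s = time.perf_counter() - t0
    return facts, latency_s

facts, latency_s = retrieve_context("Where is Paris ?")
```

In a real system each stage is far heavier (neural entity linkers, disk- or network-backed graph stores, learned retrievers), so the per-query latency grows accordingly, which is the deployment concern the claim raises.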

Authors

Sources

Referenced by nodes (4)