Relations (1)

related 2.32 — strongly supporting 4 facts

Large Language Models (LLMs) are related to knowledge representation through their enhancement with Knowledge Graphs (KGs) and Retrieval-Augmented Generation (RAG), which improve LLMs' knowledge representation and reasoning [1]. This connection is further evidenced by systematic reviews of KG-LLM integration across NLP, machine learning, and knowledge representation research [2], by special conference tracks on their intersection [3], and by dedicated papers such as 'Knowledge representation and acquisition in the era of large language models' [4].
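The mechanism behind [1] can be illustrated with a minimal sketch: retrieve KG triples relevant to a question and inject them into the prompt as structured context. This is a toy illustration, not the cited system; the triple store, overlap scoring, and prompt format are all simplifying assumptions.

```python
# Minimal sketch of KG-augmented retrieval for RAG.
# The triples, scoring function, and prompt layout are illustrative
# assumptions, not drawn from any of the cited papers.

# Toy knowledge graph as (subject, predicate, object) triples.
KG = [
    ("LLM", "enhanced_by", "knowledge graph"),
    ("RAG", "retrieves", "structured knowledge"),
    ("knowledge graph", "encodes", "entities and relations"),
]

def retrieve_triples(question, kg, top_k=2):
    """Rank triples by word overlap with the question; keep the top_k hits."""
    q_words = set(question.lower().split())
    scored = []
    for triple in kg:
        words = set(" ".join(triple).lower().split())
        scored.append((len(q_words & words), triple))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [t for score, t in scored[:top_k] if score > 0]

def build_prompt(question, kg):
    """Prepend retrieved triples as grounded context before the question."""
    facts = "\n".join(f"- {s} {p} {o}" for s, p, o in retrieve_triples(question, kg))
    return f"Known facts:\n{facts}\n\nQuestion: {question}"

print(build_prompt("How is an LLM enhanced by a knowledge graph?", KG))
```

In a real system the keyword overlap would be replaced by entity linking or embedding similarity, and the resulting prompt would be sent to the LLM; the structured facts are what lets the model ground its answer.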

Facts (4)

Sources
Knowledge Graph Combined with Retrieval-Augmented Generation ... (drpress.org, Academic Journal of Science and Technology), 1 fact
Claim: Integrating Knowledge Graphs (KGs) with Retrieval-Augmented Generation (RAG) enhances the knowledge representation and reasoning abilities of Large Language Models (LLMs) by utilizing structured knowledge, which enables the generation of more accurate answers.
A survey on augmenting knowledge graphs (KGs) with large ... (link.springer.com, Springer), 1 fact
Account: The authors conducted a systematic literature review of NLP, machine learning, and knowledge representation research from the last decade to understand approaches for integrating knowledge graphs (KGs) and large language models (LLMs).
Papers - Dr Vaishak Belle (vaishakbelle.github.io), 1 fact
Reference: I. Mocanu and Vaishak Belle authored 'Knowledge representation and acquisition in the era of large language models: Reflections on learning to reason via PAC-Semantics', published in the Natural Language Processing Journal in 2023.
Call for Papers: KR meets Machine Learning and Explanation (kr.org, KR), 1 fact
Claim: The KR 2026 special track 'KR meets Machine Learning and Explanation' invites research on the intersection of Knowledge Representation and Machine Learning, specifically covering topics such as learning symbolic knowledge (ontologies, knowledge graphs, action theories), KR-driven plan computation, logic-based learning, neural-symbolic learning, statistical relational learning, symbolic reinforcement learning, and the mutual use of KR techniques and LLMs.