Relations (1)
related (strength 2.58), strongly supported by 5 facts
Large Language Models are related to external knowledge bases: they are frequently integrated with them in architectures such as MRKL systems {fact:3, fact:4} to improve logical coherence [1] and mitigate hallucinations [2]. They also use attention mechanisms to evaluate the relevance of information retrieved from these external knowledge sources [3].
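To make the MRKL-style integration concrete, the following is a minimal sketch of the routing idea from Karpas et al. (2022): a router sends each query either to a discrete expert module (a calculator or an external knowledge-base lookup) or falls back to the LLM. The `llm_generate` stub, the toy knowledge base, and the regex-based routing rule are illustrative assumptions; in a real MRKL system the router is itself typically LLM-driven.

```python
# Minimal sketch of MRKL-style routing (Karpas et al., 2022).
# The module names and the `llm_generate` stub are hypothetical;
# a production router would typically be an LLM-based classifier.
import re
from typing import Callable, Dict

def llm_generate(prompt: str) -> str:
    """Stand-in for a call to a large language model."""
    return f"[LLM answer to: {prompt!r}]"

def calculator(query: str) -> str:
    """Discrete-reasoning expert: evaluate a simple arithmetic expression."""
    expr = re.sub(r"[^0-9+\-*/(). ]", "", query)
    return str(eval(expr))  # toy only; never eval untrusted input in practice

KNOWLEDGE_BASE: Dict[str, str] = {
    "mrkl": "MRKL: Modular Reasoning, Knowledge and Language (arXiv:2205.00445).",
}

def kb_lookup(query: str) -> str:
    """External-knowledge expert: exact-match lookup in a toy knowledge base."""
    for key, fact in KNOWLEDGE_BASE.items():
        if key in query.lower():
            return fact
    return "No entry found."

EXPERTS: Dict[str, Callable[[str], str]] = {
    "calc": calculator,
    "kb": kb_lookup,
}

def route(query: str) -> str:
    """Send the query to a discrete expert when one applies, else to the LLM."""
    if re.fullmatch(r"[0-9+\-*/(). ]+", query.strip()):
        return EXPERTS["calc"](query)
    if any(key in query.lower() for key in KNOWLEDGE_BASE):
        return EXPERTS["kb"](query)
    return llm_generate(query)

print(route("12 * (3 + 4)"))    # -> 84, via the calculator expert
print(route("What is MRKL?"))   # -> knowledge-base fact
print(route("Write a haiku."))  # -> falls through to the LLM
```

The key design point, as in the MRKL paper, is that the LLM is the fallback rather than the sole answerer: queries that a discrete module can handle exactly never reach the model, which is what lets external knowledge improve coherence and reduce hallucination.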
Facts (5)
Sources
Awesome-Hallucination-Detection-and-Mitigation (github.com), 1 fact
Reference: Sui et al. (2025), "Bridging External and Parametric Knowledge: Mitigating Hallucination of LLMs with Shared-Private Semantic Synergy in Dual-Stream Knowledge", proposes mitigating hallucinations in large language models by combining external and parametric knowledge through shared-private semantic synergy.
Empowering GraphRAG with Knowledge Filtering and Integration (arxiv.org), 1 fact
Claim: Large language models can use attention scores as a natural indicator of the relevance and significance of retrieved external knowledge, as supported by Yang et al. (2024) and Ben-Artzy and Schwartz (2024); see the sketch after this list.
Building Trustworthy NeuroSymbolic AI Systems (arxiv.org), 1 fact
Claim: Incorporating external knowledge into an ensemble of Large Language Models (LLMs) aims to improve logical coherence by ensuring generated content aligns with established facts and relationships in external knowledge sources.
The Synergy of Symbolic and Connectionist AI in LLM-Empowered ... (arxiv.org), 1 fact
Reference: Ehud Karpas et al. proposed MRKL systems, a modular, neuro-symbolic architecture that integrates large language models with external knowledge sources and discrete reasoning capabilities.
Neuro-Symbolic AI: Explainability, Challenges, and Future Trends (arxiv.org), 1 fact
Reference: Ehud Karpas, Omri Abend, Yonatan Belinkov, Barak Lenz, Opher Lieber, Nir Ratner, Yoav Shoham, Hofit Bata, Yoav Levine, Kevin Leyton-Brown, et al., "MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning," arXiv preprint arXiv:2205.00445, 2022.
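The attention-as-relevance claim above can be illustrated with a short sketch: concatenate a retrieved passage with the question, run a small causal language model, and read off how much attention mass the question tokens place on the passage tokens. The use of gpt2 and the layer- and head-averaged scoring rule are illustrative assumptions, not the exact method of Yang et al. (2024) or Ben-Artzy and Schwartz (2024).

```python
# Hedged sketch: attention mass as a retrieval-relevance signal.
# The scoring rule (mean question-to-passage attention, averaged over
# layers, heads, and token pairs) is an illustrative assumption.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_attentions=True)
model.eval()

def passage_relevance(passage: str, question: str) -> float:
    """Mean attention flowing from question tokens back to passage tokens."""
    p_ids = tokenizer(passage, return_tensors="pt").input_ids
    q_ids = tokenizer(question, return_tensors="pt").input_ids
    input_ids = torch.cat([p_ids, q_ids], dim=1)  # passage first, then question
    with torch.no_grad():
        out = model(input_ids)
    # Stack per-layer attentions into (layers, heads, seq, seq) for batch 0.
    attn = torch.stack(out.attentions).squeeze(1)
    n_p = p_ids.shape[1]
    # Rows are question positions (queries), columns are passage positions
    # (keys); causal attention lets later tokens attend to earlier ones.
    return attn[:, :, n_p:, :n_p].mean().item()

passages = [
    "MRKL systems route queries between language models and external modules.",
    "The recipe calls for two cups of flour and a pinch of salt.",
]
question = "How do MRKL systems use external modules?"
for p in passages:
    print(f"{passage_relevance(p, question):.4f}  {p[:50]}")
# The on-topic passage should typically receive more attention mass.
```

Note that raw attention means are length-sensitive (attention weights normalize over positions), so a real system would need to control for passage length before comparing scores across candidates.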