Relations (1)

related (strength 2.32) — strongly supported by 4 facts

Large language models are related to knowledge gaps because such gaps are identified as a primary cause of model hallucinations {fact:2, fact:3} and of persistent limitations in base models [1], while knowledge graphs can be used to help these models track and fill such gaps [2].

Facts (4)

Sources
Medical Hallucination in Foundation Models and Their ... · medRxiv (medrxiv.org) · 1 fact
Claim: Inadequate training data coverage creates knowledge gaps that cause large language models to hallucinate when addressing unfamiliar medical topics, according to Lee et al. (2024).
Practices, opportunities and challenges in the fusion of knowledge ... · Frontiers (frontiersin.org) · 1 fact
Claim: Knowledge Tracing empowered by knowledge graphs allows large language models (LLMs) to track knowledge evolution, fill in knowledge gaps, and improve the accuracy of responses.
LLM Hallucinations: Causes, Consequences, Prevention - LLMs · llmmodels.org · 1 fact
Claim: Large language models can hallucinate due to knowledge gaps and context issues, as they may not always understand the context in which text is being used despite processing vast amounts of data.
Hallucination Causes: Why Language Models Fabricate Facts · M. Brenndoerfer (mbrenndoerfer.com) · 1 fact
Claim: Finetuning large language models modifies the model's response style regarding expressed confidence, but the underlying knowledge gaps and exposure-bias patterns remain encoded in the base model from pretraining.