Relations (1)
related (score 3.17) — strongly supporting, 8 facts
Large language models are frequently discussed in relation to grounding because they often lack it, a deficit that leads to hallucinations, as noted in [1], [2], and [3]. Methods such as retrieval-augmented generation and knowledge-graph integration are designed specifically to supply this missing grounding, as evidenced by [4], [5], and [6].
Facts (8)
Sources
Understanding LLM Understanding (skywritingspress.ca) — 2 facts
perspective: Bergen argues that while large language models are impressive, they require grounding to adequately explain human cognition.
reference: Mollo authored 'Grounding in Large Language Models: Functional Ontologies for AI,' which explores grounding in the context of large language models.
Survey and analysis of hallucinations in large language models (frontiersin.org) — 1 fact
claim: Retrieval-augmented generation (RAG) grounds large language models by integrating external knowledge, and it is highly feasible to deploy via freely available toolkits.
Hybrid Fact-Checking that Integrates Knowledge Graphs, Large ... (aclanthology.org) — 1 fact
claim: Large language models excel at generating fluent text but often lack reliable grounding in verified information, while knowledge-graph-based fact-checkers provide precise and interpretable evidence but are limited by coverage and latency.
Symbols and grounding in large language models - PMC (pmc.ncbi.nlm.nih.gov) — 1 fact
claim: Ellie Pavlick argues that large language models can serve as plausible models of human language, providing counterarguments to two commonly cited reasons why they cannot: their lack of symbolic representations and their lack of grounding.
The Role of Hallucinations in Large Language Models - CloudThat (cloudthat.com) — 1 fact
claim: A lack of grounding causes large language models to hallucinate because, without external data sources, models rely solely on learned knowledge and may fabricate content when asked about obscure or domain-specific topics.
Hybrid Fact-Checking that Integrates Knowledge Graphs, Large ... (arxiv.org) — 1 fact
claim: Large language models excel in generating fluent text but often lack reliable grounding in verified information.
LLM-KG4QA: Large Language Models and Knowledge Graphs for ... (github.com) — 1 fact
reference: The paper 'Can Knowledge Graphs Make Large Language Models More Trustworthy?' is a research work focused on integrating knowledge graphs with LLMs for fact-checking and grounding.
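The retrieval-augmented generation mechanism the facts above describe (retrieve external evidence, then constrain the model to answer from it) can be sketched as follows. This is a minimal toy, assuming an invented three-passage corpus, a word-overlap retriever, and a hypothetical prompt template; it is not the API of any of the toolkits the sources mention.

```python
import re

# Toy RAG sketch: the corpus, scorer, and prompt format below are
# illustrative assumptions, not any specific toolkit's interface.
CORPUS = [
    "Retrieval-augmented generation grounds model output in retrieved text.",
    "Knowledge graphs store facts as subject-predicate-object triples.",
    "Hallucination is fluent output unsupported by verified information.",
]

def tokens(text: str) -> set[str]:
    """Lowercased word set with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by word overlap with the query (toy retriever)."""
    q = tokens(query)
    ranked = sorted(corpus, key=lambda p: len(q & tokens(p)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Grounded prompt: the model is told to answer from the context only."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

query = "What is retrieval-augmented generation?"
print(build_prompt(query, retrieve(query, CORPUS)))
```

In a real pipeline the overlap scorer would be replaced by dense or sparse retrieval over a document index, and the assembled prompt would be sent to the language model; the grounding effect comes from restricting the answer to the retrieved context.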
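The knowledge-graph-based fact-checking that the hybrid fact-checking sources describe can be sketched in the same toy style: a claim is reduced to a (subject, predicate, object) triple and looked up in a graph. The graph contents and verdict labels here are illustrative assumptions, not the cited papers' method; note how a miss reflects the coverage limits those sources mention, not falsity.

```python
# Toy KG fact-checking sketch: the triples and verdict labels are
# illustrative assumptions, not the cited papers' approach.

# A knowledge graph represented as a set of (subject, predicate, object) triples.
KG = {
    ("RAG", "provides", "grounding"),
    ("hallucination", "caused_by", "lack of grounding"),
}

def check_triple(triple: tuple[str, str, str], kg: set) -> str:
    """Return 'supported' if the triple is in the KG, else 'not found'.
    A 'not found' verdict reflects KG coverage limits, not that the
    claim is false."""
    return "supported" if triple in kg else "not found"

print(check_triple(("RAG", "provides", "grounding"), KG))    # supported
print(check_triple(("RAG", "causes", "hallucination"), KG))  # not found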