grounding
Facts (16)
Sources
Understanding LLM Understanding skywritingspress.ca Jun 14, 2024 3 facts
perspective: Bergen argues that while large language models are impressive, they require grounding to adequately explain human cognition.
reference: Pulvermüller authored the work titled 'Constraining networks biologically to explain grounding,' which proposes using biological constraints on networks to explain the concept of grounding.
reference: Mollo authored the work titled 'Grounding in Large Language Models: Functional Ontologies for AI,' which explores the concept of grounding within the context of large language models.
Survey and analysis of hallucinations in large language models frontiersin.org Sep 29, 2025 2 facts
claim: Retrieval-augmented generation (RAG) integrates external knowledge for grounding in large language models; it is highly feasible to implement because free toolkits are available.
claim: Retrieval-augmented generation (RAG) improves grounding and reduces reliance on model memorization by incorporating external knowledge retrieval at inference time, as established by Lewis et al. (2020).
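The RAG mechanism described in the two claims above — retrieve external passages at inference time, then condition generation on them — can be sketched in a few lines. This is a toy illustration, not one of the free toolkits the source mentions: the corpus, the word-overlap scorer, and the prompt template are all placeholder assumptions, and no actual model call is made.

```python
# Toy sketch of retrieval-augmented generation (RAG): retrieve relevant
# passages at inference time and prepend them to the prompt, so the model
# answers from external knowledge instead of relying on memorization.
# The corpus and overlap-based scorer are illustrative placeholders.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus passages by word overlap with the query (toy scorer)."""
    q_words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a prompt that grounds the answer in retrieved passages."""
    passages = retrieve(query, corpus)
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "RAG retrieves external documents at inference time.",
    "Grounding links model outputs to verified sources.",
    "Panpsychism is a view in philosophy of mind.",
]
print(build_grounded_prompt("How does RAG ground model outputs", corpus))
```

A real system would replace the overlap scorer with dense-vector retrieval and pass the assembled prompt to a language model; the structure — retrieve, then condition — is the same.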
Panpsychism (Stanford Encyclopedia of Philosophy/Fall 2025 Edition) plato.stanford.edu May 23, 2001 2 facts
claim: It is generally assumed in philosophy that for a fact X to ground a fact Y, it must be the case that X necessitates Y.
claim: Philosophers use the term 'grounding' to describe a non-causal explanatory relationship where one set of facts wholly consists in another set of facts, such as the relationship between the existence of a party and the specific activities of the people attending it.
Building Trustworthy NeuroSymbolic AI Systems - arXiv arxiv.org 2 facts
claim: Groundedness serves as the foundation for both explainability and safety in AI systems, as a lack of grounding in provided instructions can lead to unintended consequences.
claim: The National Science Foundation (NSF) identifies grounding, instructability, and alignment as the three fundamental attributes of ensuring AI safety.
The Role of Hallucinations in Large Language Models - CloudThat cloudthat.com Sep 1, 2025 2 facts
claim: A lack of grounding causes large language models to hallucinate because, without external data sources, models rely solely on learned knowledge and may fabricate content when asked about obscure or domain-specific topics.
claim: Large language model hallucinations occur due to gaps in training data, a lack of grounding, or limitations in how models understand real-world facts.
The Effects of Attachment and Trauma on Parenting and Children's ... rsisinternational.org Aug 16, 2025 1 fact
procedure: Mindfulness techniques, including breath meditation, body scans, and grounding, enable parents to be more tolerant of painful feelings and more responsive and less reactive toward their children.
Hybrid Fact-Checking that Integrates Knowledge Graphs, Large ... aclanthology.org 1 fact
claim: Large language models excel at generating fluent text but often lack reliable grounding in verified information, while knowledge-graph-based fact-checkers provide precise and interpretable evidence but are limited by coverage and latency.
Symbols and grounding in large language models - PMC pmc.ncbi.nlm.nih.gov 1 fact
claim: Ellie Pavlick argues that large language models can serve as plausible models of human language, providing counterarguments to two commonly cited reasons why they cannot: their lack of symbolic representations and their lack of grounding.
Hybrid Fact-Checking that Integrates Knowledge Graphs, Large ... arxiv.org Nov 5, 2025 1 fact
claim: Large language models excel at generating fluent text but often lack reliable grounding in verified information.
LLM-KG4QA: Large Language Models and Knowledge Graphs for ... github.com 1 fact
reference: The paper 'Can Knowledge Graphs Make Large Language Models More Trustworthy?' is a research work focused on the integration of knowledge graphs with LLMs for fact-checking and grounding.