Relations (1)
related 0.80 — strongly supporting 8 facts
Large Language Models are linked to the concept of understanding through an ongoing academic debate over whether these models possess genuine cognitive comprehension {fact:2, 3, 8} or emergent human-like capacities [1]. Research also explores how structured knowledge can improve these models' reasoning and understanding [2], as documented in several scholarly publications {fact:5, 6, 7}.
Facts (8)
Sources
Understanding LLM Understanding (skywritingspress.ca), 6 facts
reference: Holger Lyre authored the paper '"Understanding AI": Semantic Grounding in Large Language Models', published as an arXiv preprint in 2024.
claim: The question of what constitutes "understanding" has gained urgency due to recent capability leaps in generative artificial intelligence, specifically large language models.
reference: Agüera y Arcas, B. (2022) published 'Do large language models understand us?' on Medium.
claim: It is difficult to determine whether large language models possess an underlying notion of understanding based solely on observing their behavior.
perspective: Some researchers argue that reasoning, understanding, and other human-like capacities may be emergent properties of large language models.
reference: van Dijk, B. M. A., Kouwenhoven, T., Spruit, M. R., & van Duijn, M. J. (2023) published 'Large Language Models: The Need for Nuance in Current Debates and a Pragmatic Perspective on Understanding' in arXiv (arXiv:2310.19671).
A survey on augmenting knowledge graphs (KGs) with large ... (link.springer.com), 1 fact
claim: In a synergized framework, Large Language Models use structured knowledge from Knowledge Graphs to improve reasoning and understanding, while Knowledge Graphs utilize the language production and contextual capabilities of Large Language Models.
Not Minds, but Signs: Reframing LLMs through Semiotics (arxiv.org), 1 fact
reference: Mitchell and Krakauer's 2023 paper 'The debate over understanding in AI's large language models' addresses the controversy over whether large language models truly 'understand' information.