Relations (1)
related 0.20 — supporting 2 facts
Large Language Models and human cognition are linked through an academic debate over whether the former can serve as a model for the latter, as seen in Bergen's argument that LLMs require grounding [1] and the critical perspective that LLMs should be evaluated on their own merits rather than by their resemblance to human cognitive processes [2].
Facts (2)
Sources
Understanding LLM Understanding (skywritingspress.ca, 1 fact)
Perspective: Bergen argues that while large language models are impressive, they require grounding to adequately explain human cognition.
Not Minds, but Signs: Reframing LLMs through Semiotics - arXiv (arxiv.org, 1 fact)
Perspective: Large Language Models (LLMs) should be evaluated as producers of polysemic signals, intertextual echoes, and semiotically rich fragments rather than by their resemblance to human cognition.