Relations (1)
related (strength 2.32) — strongly supporting, 4 facts
Large Language Models relate to memory in several distinct senses: traditional monitoring tools track memory usage as a system metric [1], while the models themselves lack inherent memory for complex multistep planning [2]. Furthermore, memory in this context is defined by specific technical mechanisms such as context windows and parameters [3], and has been empirically measured in several research studies [4].
Facts (4)
Sources
A Survey of Incorporating Psychological Theories in LLMs (arXiv, arxiv.org) — 2 facts
claim: In psychology, memory entails structured encoding and recall, whereas in LLMs, memory typically refers to context windows or parameters.
measurement: Memory in Large Language Models is measured by Li et al. (2023) regarding parametric knowledge, by Zhang et al. (2024a) using n-back tasks, and by Timkey & Linzen (2023) regarding capacity.
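As a point of reference for the n-back measurement mentioned above, the sketch below generates the task structure itself: a sequence of stimuli where each trial asks whether the current stimulus matches the one presented n positions earlier. This is a generic illustration of the n-back paradigm, not the protocol of Zhang et al. (2024a); the function name and parameters are illustrative.

```python
import random

def make_n_back_trials(n: int, length: int, alphabet: str = "ABC",
                       seed: int = 0) -> list[tuple[str, bool]]:
    """Return (stimulus, is_match) pairs for an n-back memory task.

    is_match is True when the stimulus equals the one shown n positions
    earlier; the first n trials can never be matches.
    """
    rng = random.Random(seed)
    seq = [rng.choice(alphabet) for _ in range(length)]
    return [(s, i >= n and s == seq[i - n]) for i, s in enumerate(seq)]
```

Scoring a model would then amount to prompting it with the stimulus stream and comparing its match/no-match answers against the `is_match` labels.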
LLM Observability: How to Monitor AI When It Thinks in Tokens (TTMS, ttms.com) — 1 fact
claim: Traditional application performance monitoring tools are insufficient for LLMs because they focus on system metrics like CPU, memory, and HTTP errors, whereas LLM issues often involve the content of responses, such as factual accuracy or tone.
Building Better Agentic Systems with Neuro-Symbolic AI (cutter.com) — 1 fact
claim: Large Language Models (LLMs) struggle with multistep planning because they generate text one token at a time without a built-in memory of the overall plan, leading to logical errors or the loss of the thread in complex sequences.
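The common remedy implied by this claim is to keep the plan as explicit state outside the model and re-inject it each turn, so the generator cannot lose the thread. The sketch below is a minimal illustration of that pattern, not the cited article's system; `fake_llm` is a hypothetical stand-in for a real model call.

```python
def fake_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call: echoes the step it was
    # asked to perform so the loop below is runnable without a model.
    return f"done: {prompt.splitlines()[-1]}"

def run_with_plan_memory(goal: str, plan: list[str]) -> list[str]:
    """Execute a multistep plan, re-feeding completed steps each turn.

    The `completed` list is the external memory the model itself lacks:
    every prompt restates the goal and everything done so far.
    """
    completed: list[str] = []
    for step in plan:
        prompt = (
            f"Goal: {goal}\n"
            f"Completed so far: {completed}\n"
            f"Next step: {step}"
        )
        completed.append(fake_llm(prompt))
    return completed
```

With a real model in place of `fake_llm`, the same loop bounds how much of the plan the model must hold implicitly, since the scaffold replays it explicitly on every call.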