concept

Large Language Model Agents

Also known as: LMAs, large language model-driven agents, large language model-based agents, large language model-empowered agents

Facts (21)

Sources
KG-RAG: Bridging the Gap Between Knowledge and Creativity (arXiv, May 20, 2024) — 10 facts
claim: The KG-RAG pipeline integrates Knowledge Graphs as external knowledge modules for Language Model Agents to address information hallucination through dynamically updated graphs and granular, context-sensitive retrieval processes.
claim: The KG-RAG pipeline reduces the propensity for Large Language Model Agents to generate hallucinated content, thereby enhancing the reliability and factual accuracy of their responses.
claim: Transitioning from unstructured dense text representations to dynamic, structured knowledge representation via knowledge graphs can significantly reduce the occurrence of hallucinations in Language Model Agents by ensuring they rely on explicit information rather than implicit knowledge stored in model weights.
claim: Research on enabling interaction with external tools has improved the action capabilities of Large Language Model Agents (LMAs).
perspective: The integration of structured knowledge into the operational framework of Language Model Agents (LMAs) via knowledge graphs represents a significant paradigm shift in how these agents store and manage information.
claim: Integrating Language Model Agents (LMAs) with external databases has advanced knowledge retrieval and memory recall capabilities.
procedure: The KG-RAG pipeline extracts triples from raw text, stores them in a Knowledge Graph database, and allows searching for complex information to augment Language Model Agents with external, robust, and faithful knowledge storage.
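The extract–store–search procedure above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the line-based triple extractor and the in-memory store are stand-ins for the LLM-based extraction and the graph database the pipeline actually uses.

```python
# Sketch of the KG-RAG storage/retrieval steps: extract triples from
# raw text, store them in a graph structure, and search per entity.
from collections import defaultdict

def extract_triples(text):
    """Toy extractor: assumes one 'subject relation object' statement
    per line. A real pipeline would use an LLM or IE model here."""
    triples = []
    for line in text.strip().splitlines():
        parts = line.strip().split(maxsplit=2)
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples

class KnowledgeGraph:
    """In-memory stand-in for a Knowledge Graph database."""
    def __init__(self):
        self.edges = defaultdict(list)  # subject -> [(relation, object)]

    def add(self, triples):
        for s, r, o in triples:
            self.edges[s].append((r, o))

    def search(self, subject):
        """Granular, entity-level retrieval: return explicit facts about
        one subject instead of a dense unstructured text chunk."""
        return [(subject, r, o) for r, o in self.edges[subject]]

kg = KnowledgeGraph()
kg.add(extract_triples("Paris capital_of France\nFrance borders Spain"))
facts = kg.search("Paris")
print(facts)  # [('Paris', 'capital_of', 'France')]
```

Because retrieval returns explicit triples, the downstream agent can be conditioned on exactly the facts found in the graph rather than on knowledge latent in its weights.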
claim: Knowledge Graphs enable Language Model Agents to access vast volumes of accurate and updated information without requiring resource-intensive fine-tuning.
claim: Retrieval-Augmented Generation (RAG) augments Large Language Model Agents (LMAs) by dynamically injecting specific information into prompts at inference time without modifying the model’s weights.
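The prompt-injection step this claim describes can be sketched as follows. The word-overlap retriever and the prompt template are illustrative assumptions; production systems typically use embedding-based retrieval, but the key point is the same: retrieved text is spliced into the prompt, and the model's weights are never touched.

```python
# Sketch of RAG-style prompt augmentation at inference time.
def retrieve(query, corpus, k=2):
    """Toy retriever: rank snippets by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(corpus,
                  key=lambda s: -len(q & set(s.lower().split())))[:k]

def build_prompt(query, corpus):
    """Inject retrieved snippets into the prompt; no weight updates."""
    context = "\n".join(f"- {s}" for s in retrieve(query, corpus))
    return ("Answer using only the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {query}\nAnswer:")

corpus = [
    "The KG-RAG pipeline stores triples in a knowledge graph.",
    "Symbolic AI uses rigid hand-written rules.",
]
prompt = build_prompt("What does the KG-RAG pipeline store?", corpus)
print(prompt)
```

The resulting prompt would then be sent to any off-the-shelf LLM; swapping the knowledge source requires only changing the corpus, not retraining.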
claim: The 'extended mind' theory, as referenced by Clark (2010), is relevant to Large Language Model Agents (LMAs) because these agents can use external cognitive extensions to offload memory and manage complex tasks, similar to how humans use tools like smartphones and notebooks.
The Synergy of Symbolic and Connectionist AI in LLM-Empowered ... (arXiv, Jul 11, 2024) — 3 facts
claim: Large language model-empowered agents handle ambiguity and generate more human-like responses by utilizing flexible, context-driven reasoning embedded in model weights, which contrasts with the rigidity of symbolic AI.
reference: Zhiheng Xi et al. published 'The rise and potential of large language model based agents: A survey' as an arXiv preprint (arXiv:2309.07864) in 2023.
claim: Few-shot in-context learning (ICL) allows large language model-empowered agents to solve problems by utilizing provided examples within a prompt, thereby avoiding the need for explicit re-training of the model.
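A minimal sketch of the few-shot prompt this claim describes: worked examples are placed ahead of the query so the model can infer the task pattern in context. The sentiment task and the example reviews are invented for illustration.

```python
# Few-shot in-context learning: demonstrations go in the prompt,
# so the model is never re-trained for the task.
examples = [
    ("The movie was wonderful.", "positive"),
    ("I wasted two hours of my life.", "negative"),
]

def few_shot_prompt(query, examples):
    """Format (input, label) demonstrations followed by the new input."""
    shots = "\n".join(f"Review: {x}\nLabel: {y}" for x, y in examples)
    return f"{shots}\nReview: {query}\nLabel:"

prompt = few_shot_prompt("An instant classic.", examples)
print(prompt)
```

The model is expected to continue the prompt with a label that matches the demonstrated pattern; adding or swapping examples changes the task without any gradient update.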
KG-RAG: Bridging the Gap Between Knowledge and Creativity (arXiv, May 20, 2024) — 3 facts
claim: Large Language Model Agents (LMAs) face significant challenges in maintaining factual accuracy while preserving creative capabilities, including information hallucinations, catastrophic forgetting, and limitations in processing long contexts during knowledge-intensive tasks.
claim: Preliminary experiments using the ComplexWebQuestions dataset indicate that the KG-RAG pipeline reduces hallucinated content in Large Language Model Agents.
reference: The KG-RAG (Knowledge Graph-Retrieval Augmented Generation) pipeline is a framework designed to enhance the knowledge capabilities of Large Language Model Agents by integrating structured Knowledge Graphs with Large Language Model functionalities, thereby reducing reliance on the latent knowledge of the models.
A Comprehensive Benchmark and Evaluation Framework for Multi ... (arXiv, Jan 6, 2026) — 2 facts
reference: The paper 'Large language model agents for biomedicine: A comprehensive review of methods, evaluations, challenges, and future directions' by Xiaoran Xu and Ravi Sankar provides a review of LLM agents in the biomedical field, published in Information in 2025.
reference: Junfeng Lu and Yueyan Li proposed a dynamic affective memory management system for personalized large language model agents in 2025.
LLM-KG4QA: Large Language Models and Knowledge Graphs for ... (GitHub) — 1 fact
reference: EICopilot is a system designed to search and explore enterprise information over large-scale knowledge graphs using Large Language Model-driven agents (arXiv, 2025).
A Survey of Incorporating Psychological Theories in LLMs (arXiv) — 1 fact
claim: Yang et al. (2024) proposed 'PsychoGAT', a psychological measurement paradigm that utilizes interactive fiction games with Large Language Model agents, presented at the 62nd Annual Meeting of the Association for Computational Linguistics.
Leveraging Knowledge Graphs and LLM Reasoning to Identify ... (arXiv, Jul 23, 2025) — 1 fact
claim: The authors present the first application combining Knowledge Graphs and Large Language Model agents to analyze output data from Discrete Event Simulations of warehouse operations specifically to identify bottlenecks and inefficiencies.