concept

LLM-based agent

Also known as: LLM agent, LLM agents, LLM-driven agents, LLM-empowered agents, LLM-based agents, LLM-powered agents

Facts (44)

Sources
A Survey of Incorporating Psychological Theories in LLMs · arxiv.org · 12 facts
reference: Zhang et al. (2024c) explored collaboration mechanisms for LLM agents through the lens of social psychology in their paper 'Exploring collaboration mechanisms for LLM agents: A social psychology view', published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.
claim: Castricato et al. (2025) presented PERSONA, a dataset containing 1,586 synthetic personas for LLM agents.
reference: Tharindu Kumarage, Cameron Johnson, Jadie Adams, Lin Ai, Matthias Kirchner, Anthony Hoogs, Joshua Garland, Julia Hirschberg, Arslan Basharat, and Huan Liu published 'Personalized attacks of social engineering in multi-turn conversations: LLM agents for simulation and detection' as an arXiv preprint in 2025.
measurement: Wu et al. (2025a) released the RAIDEN Benchmark, which consists of 40,000 multi-turn dialogues for LLM agents.
claim: Khan et al. (2024) utilized structured debates to improve the truthfulness of LLM agents.
reference: Xuan Liu, Jie Zhang, Haoyang Shang, Song Guo, Chengxu Yang, and Quanyan Zhu authored 'Exploring prosocial irrationality for LLM agents: A social cognition view', published as an arXiv preprint in 2024.
claim: Sclar et al. (2023) integrated belief tracking into LLM agents, while Wang et al. (2022) and Sclar et al. (2022) focused on coordination.
reference: The paper 'AgentReview: Exploring peer review dynamics with LLM agents' by Yiqiao Jin, Qinlin Zhao, Yiyang Wang, Hao Chen, Kaijie Zhu, Yijia Xiao, and Jindong Wang was published in the Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 1208–1226, Miami, Florida, USA, November 2024.
claim: Wang et al. (2025) model opinion dynamics in LLM agents, while Chen et al. (2024b) evaluate social intelligence.
claim: Kumarage et al. (2025) simulated multi-turn social engineering attacks using LLM agents with varied personality traits to demonstrate how psychological profiles influence user vulnerability.
reference: Xiaofei Dong, Xueqiang Zhang, Weixin Bu, Dan Zhang, and Feng Cao authored 'A survey of LLM-based agents: Theories, technologies, applications and suggestions', published by IEEE in the proceedings of the 2024 3rd International Conference on Artificial Intelligence, Internet of Things and Cloud Computing Technology (AIoTC).
claim: Theory of Mind (ToM) enables LLM agents to grasp the mental states of other agents.
Leveraging Knowledge Graphs and LLM Reasoning to Identify ... · arxiv.org · Jul 23, 2025 · 12 facts
measurement: The performance of the LLM agent framework was measured using the pass@k metric, as defined by Chen et al. (2021), to assess answer accuracy across 4 attempts.
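The pass@k metric cited here is conventionally computed with the unbiased estimator from Chen et al. (2021); a minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from Chen et al. (2021):
    1 - C(n - c, k) / C(n, k), where n = samples generated,
    c = samples that are correct, k = attempts considered."""
    if n - c < k:
        return 1.0  # too few wrong samples to fill k draws: success is certain
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 4 samples, 1 correct: pass@1 is the plain success rate
print(pass_at_k(4, 1, 1))  # 0.25
```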
claim: The LLM-agent identified the 'AGV to FL' (Automated Guided Vehicle to Forklift) transfer as the key bottleneck for 'CamelCargo' by confirming an overall delay of 6,848 seconds against a 4,934-second average and performing sub-questioning to isolate the issue.
claim: The authors' framework employs LLM-based agents, as referenced in Guo et al. (2024) and Yao et al. (2022), to give warehouse planners intuitive interaction with simulation data, as described by Xia et al. (2024).
procedure: The LLM agent's query processing procedure follows these steps: (1) the agent receives a complex natural language query regarding warehouse performance or planning; (2) it autonomously generates a sequence of sub-questions, formulated one at a time and conditioned on evidence from previous sub-question answers; (3) for each sub-question, it generates a precise natural-language-to-Cypher query for Knowledge Graph interaction, as referenced in Hornsteiner et al. (2024) and Mandilara et al. (2025); (4) it retrieves the relevant information; (5) it performs self-reflection, as referenced in Huang et al. (2022) and Madaan et al. (2023), to validate findings and correct errors in the analytical pathway.
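The five-step loop can be sketched as follows. Every callable here is a hypothetical stand-in for an LLM or Knowledge Graph call, not the authors' actual interface:

```python
def answer_query(gen_subquestion, to_cypher, run_cypher, reflect, summarize,
                 question, max_steps=5):
    """Iterative loop: sub-question -> Cypher -> retrieval -> self-reflection,
    repeated until the agent decides it has enough evidence or max_steps is hit."""
    evidence = []  # (sub_question, result) pairs from earlier iterations
    for _ in range(max_steps):
        sub_q = gen_subquestion(question, evidence)  # step (2): conditioned on prior answers
        if sub_q is None:                            # agent signals it is done
            break
        cypher = to_cypher(sub_q)                    # step (3): NL-to-Cypher generation
        result = run_cypher(cypher)                  # step (4): Knowledge Graph retrieval
        if reflect(sub_q, cypher, result):           # step (5): validate the finding
            evidence.append((sub_q, result))
    return summarize(question, evidence)             # final synthesis
```

Passing the LLM and graph operations in as callables keeps the sketch runnable with stubs; a real system would bind them to an LLM client and a Cypher-capable graph driver.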
procedure: The LLM agent calculates waiting times for AGVs by subtracting the worker pick-up end time from the AGV arrival time, and for forklifts by subtracting the AGV journey end time from the forklift placement start time.
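The waiting-time arithmetic described above, with illustrative timestamps (seconds since simulation start):

```python
def agv_waiting_time(agv_arrival: float, worker_pickup_end: float) -> float:
    # AGV waiting time = AGV arrival time minus worker pick-up end time
    return agv_arrival - worker_pickup_end

def forklift_waiting_time(placement_start: float, agv_journey_end: float) -> float:
    # forklift waiting time = forklift placement start minus AGV journey end
    return placement_start - agv_journey_end

print(agv_waiting_time(120.0, 95.0))        # 25.0
print(forklift_waiting_time(400.0, 310.0))  # 90.0
```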
procedure: The Summarizer module in the proposed LLM agent framework synthesizes the final answer by interpreting aggregated Knowledge Graph data, identifying performance bottlenecks and suggesting causal factors through traversal of relationships within the graph.
claim: The LLM agent in the authors' framework utilizes an iterative reasoning mechanism, as referenced in Wei et al. (2022) and Luo et al. (2023), to perform diagnostic analysis for warehouse planning.
reference: Synergized LLMs + KGs involve a bidirectional integration, often featuring LLM-based agents that reason over, interact with, and manipulate Knowledge Graphs to perform complex, multi-step tasks, as described by Jiang et al. (2024) and Luo et al. (2023).
procedure: The LLM-based agent in the proposed framework employs an iterative reasoning mechanism that interprets natural language questions by generating sequential, interdependent sub-questions, where each sub-question is conditioned on the evidence from answers to previous ones.
procedure: The research framework aims to enable LLM-based agents to transform natural language questions about Discrete Event Simulation (DES) output into executable queries over a Knowledge Graph, iteratively refine analytical paths based on retrieved evidence, and synthesize information from disparate parts of the Knowledge Graph to diagnose operational issues.
claim: The authors propose a novel LLM-based agent that employs an iterative, self-correcting reasoning process over Knowledge Graphs derived from Discrete Event Simulation (DES) outputs to automate and enhance the identification and diagnosis of warehouse inefficiencies.
reference: The proposed framework for warehouse operational analysis consists of two main components: the ontological construction of a Knowledge Graph from Discrete Event Simulation output data, and an LLM-agent equipped with an iterative reasoning mechanism that features sequential sub-questioning, Cypher generation for Knowledge Graph interaction, and self-reflection.
The Synergy of Symbolic and Connectionist AI in LLM ... · arxiv.org · 4 facts
claim: LLM-based agents handle ambiguity and generate human-like responses better than symbolic AI because the knowledge embedded in LLMs is more flexible.
claim: LLM-based agents leverage vast text corpora and self-supervised pre-training to infer patterns and relationships from raw text, rather than relying on explicit symbols and rules.
claim: LLM-powered agents can process online data to respond to real-time changes and handle larger datasets more effectively than Knowledge Graphs.
claim: The Chain-of-Thought (CoT) method enhances the cognitive task performance of LLM-empowered agents by guiding the models to generate text describing intermediate reasoning steps.
How to Improve Multi-Hop Reasoning With Knowledge Graphs and ... · neo4j.com · Jun 18, 2025 · 2 facts
procedure: An LLM agent using a chain-of-thought flow to answer a question about the founders of Prosper Robotics follows this procedure: (1) separates the query into sub-questions ('Who is the founder of Prosper Robotics?' and 'What’s the latest news about the founder?'), (2) queries a knowledge graph to identify the founder as Shariq Hashme, and (3) rewrites the second question to 'What’s the latest news about Shariq Hashme?' to retrieve the final answer.
claim: LLM agents utilize chain-of-thought flows to separate complex questions into multiple steps, define a plan, and query tools such as APIs or knowledge bases to generate answers.
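The Prosper Robotics flow reduces to: decompose, resolve the first hop against the graph, rewrite the second hop with the resolved entity. A toy sketch with an in-memory stand-in for the knowledge graph (a real agent would have an LLM do the decomposition and rewriting):

```python
# toy knowledge graph as (subject, relation) -> object triples
TOY_KG = {("Prosper Robotics", "founder"): "Shariq Hashme"}

def answer_multi_hop() -> str:
    # (1) decompose: 'Who is the founder of Prosper Robotics?' +
    #     'What's the latest news about the founder?'
    # (2) first hop: resolve the founder against the knowledge graph
    founder = TOY_KG[("Prosper Robotics", "founder")]
    # (3) rewrite the second sub-question with the resolved entity
    return f"What's the latest news about {founder}?"

print(answer_multi_hop())  # What's the latest news about Shariq Hashme?
```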
Bridging the Gap Between LLMs and Evolving Medical Knowledge · arxiv.org · Jun 29, 2025 · 2 facts
procedure: The authors of 'Bridging the Gap Between LLMs and Evolving Medical Knowledge' developed an autonomous search and graph-building process powered by specialized LLM agents that continuously generate and refine Medical Knowledge Graphs (MKGs) through integrated workflows using search engines and medical textbooks.
procedure: The AMG-RAG framework utilizes LLM-driven agents assisted by domain-specific search tools to generate graph entities enriched with metadata, confidence scores, and relevance indicators.
LLM-empowered knowledge graph construction: A survey · arxiv.org · Oct 23, 2025 · 2 facts
claim: LLM-powered agents require persistent, structured memory to overcome the limitations of finite context windows.
reference: Wujiang Xu, Zujie Liang, Kai Mei, Hang Gao, Juntao Tan, and Yongfeng Zhang proposed A-MEM, an agentic memory system for LLM agents, in their 2025 arXiv preprint.
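A toy illustration of what "persistent, structured memory" means in practice: records are kept as structured entries and flushed to disk, so they survive beyond any single context window. This is illustrative only, not the design of A-MEM or any specific system:

```python
import json
import pathlib

class AgentMemory:
    """Toy persistent, structured agent memory backed by a JSON file."""

    def __init__(self, path="agent_memory.json"):
        self.path = pathlib.Path(path)
        # reload any records persisted by a previous session
        self.records = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, topic, content):
        self.records.append({"topic": topic, "content": content})
        self.path.write_text(json.dumps(self.records))  # persist to disk

    def recall(self, topic):
        return [r["content"] for r in self.records if r["topic"] == topic]
```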
How to combine LLMs and Knowledge Graphs for enterprise AI · Tony Seale, linkedin.com · Nov 14, 2025 · 2 facts
claim: Tony Seale defines the 'Neural-Symbolic Loop' as a pattern where LLM-based agents are combined with Knowledge Graphs to structure, connect, and reason over enterprise data.
claim: The Neural-Symbolic Loop is a pattern used to stabilize reasoning in LLM-based agents that lack internal coherence by providing external structure.
The Synergy of Symbolic and Connectionist AI in LLM-Empowered ... · arxiv.org · Jul 11, 2024 · 2 facts
claim: LLM-empowered agents (LAAs) hold unique advantages over Knowledge Graphs (KGs): they approximate human reasoning through agentic workflows and varied prompting techniques, scale effectively to large datasets, adapt to in-context samples, and leverage the emergent abilities of Large Language Models.
claim: Promising future directions for neuro-symbolic AI include neuro-vector-symbolic architectures, which incorporate vector manipulation to enhance agentic reasoning, and generative encoding, which embeds agentic logical steps into text vectorization to improve sample selection for in-context learning in LLM-empowered agents.
Reducing hallucinations in large language models with custom ... · aws.amazon.com · Nov 26, 2024 · 1 fact
account: Shayan Ray is an Applied Scientist at Amazon Web Services whose research focuses on natural language processing, natural language understanding, natural language generation, conversational AI, task-oriented dialogue systems, and LLM-based agents.
LLM Knowledge Graph: Merging AI with Structured Data · puppygraph.com · Feb 19, 2026 · 1 fact
claim: LLM knowledge graphs enable reliable vertical agents by providing the relationship-rich enterprise context, dynamic data updates, and high accuracy required by specialized, domain-specific LLM agents.
Large Language Models Meet Knowledge Graphs for Question ... · arxiv.org · Sep 22, 2025 · 1 fact
claim: PoG (Chen et al., 2024a) integrates reflection and self-correction mechanisms to adaptively explore reasoning paths over a knowledge graph via an LLM agent, augmenting the LLM in complex reasoning and question answering.
Enterprise AI Requires the Fusion of LLM and Knowledge Graph · stardog.com · Dec 4, 2024 · 1 fact
claim: Knowledge Graphs are a dominant design pattern for enabling Retrieval-Augmented Generation (RAG) and LLM agents to deliver value quickly with strategic relevance.
Detecting hallucinations with LLM-as-a-judge: Prompt ... · Aritra Biswas, Noé Vernier, datadoghq.com · Aug 25, 2025 · 1 fact
claim: Prompt engineering for LLM agents involves defining a logical flow across multiple LLM calls and using format restrictions, such as structured output, to create guidelines for LLM output.
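A format restriction of this kind is typically enforced by validating the model's output against a required schema before it feeds the next call in the flow. A minimal sketch for an LLM-as-a-judge step; the field names are illustrative assumptions, not Datadog's actual schema:

```python
import json

# fields the judge is instructed to emit as JSON
REQUIRED_KEYS = {"verdict", "confidence", "evidence"}

def parse_judge_output(raw: str) -> dict:
    """Validate one LLM call's structured output before the next call uses it."""
    out = json.loads(raw)  # raises ValueError on malformed JSON
    missing = REQUIRED_KEYS - out.keys()
    if missing:
        raise ValueError(f"judge output missing fields: {sorted(missing)}")
    if out["verdict"] not in {"hallucination", "faithful"}:
        raise ValueError("verdict must be 'hallucination' or 'faithful'")
    return out

print(parse_judge_output(
    '{"verdict": "faithful", "confidence": 0.9, "evidence": "matches source"}'
))
```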
Awesome-Hallucination-Detection-and-Mitigation · github.com · 1 fact
reference: The paper 'LLM-based Agents Suffer from Hallucinations: A Survey of Taxonomy, Methods, and Directions' by Lin et al. (2025) surveys the taxonomy, methods, and future directions regarding hallucinations in LLM-based agents.