Relations (1)

related 2.32 — strongly supported by 4 facts

Large Language Models are designed to support agent abilities such as planning [1], [2], though they currently struggle with complex long-term planning tasks [3]. Consequently, specialized benchmarks such as PlanBench have been developed to evaluate these planning capabilities in Large Language Models [4].

Facts (4)

Sources
The Synergy of Symbolic and Connectionist AI in LLM ... (arxiv.org, arXiv) — 1 fact
claim: Large Language Models are trained on large-scale transformers comprising billions of learnable parameters to support abilities including perception, reasoning, planning, and action.

The Synergy of Symbolic and Connectionist AI in LLM-Empowered ... (arxiv.org, arXiv) — 1 fact
claim: Large Language Models (LLMs) are trained on large-scale transformers comprising billions of learnable parameters to support agent abilities such as perception, reasoning, planning, and action.

Building Better Agentic Systems with Neuro-Symbolic AI (cutter.com, Cutter Consortium) — 1 fact
claim: Large language models (LLMs) struggle with tasks that require strict logic, long-term planning, or adherence to hard rules such as laws, legal codes, or physics.

Combining large language models with enterprise knowledge graphs (frontiersin.org, Frontiers) — 1 fact
reference: The paper 'PlanBench: An Extensible Benchmark for Evaluating Large Language Models on Planning and Reasoning about Change' by Valmeekam et al. (2024) presents a benchmark designed to evaluate the planning and reasoning capabilities of large language models.