Relations (1)

related (score 1.58), strongly supporting, 2 facts

Large Language Models are related to perception because they are built on large-scale transformer architectures designed to support agent abilities, which explicitly include perception as a core function, as stated in [1] and [2].

Facts (2)

Sources
The Synergy of Symbolic and Connectionist AI in LLM ... (arxiv.org, arXiv)
Claim: Large Language Models are trained on large-scale transformers comprising billions of learnable parameters to support abilities including perception, reasoning, planning, and action.
The Synergy of Symbolic and Connectionist AI in LLM-Empowered ... (arxiv.org, arXiv)
Claim: Large Language Models (LLMs) are trained on large-scale transformers comprising billions of learnable parameters to support agent abilities such as perception, reasoning, planning, and action.