Relations (1)

related 0.30 — supporting 3 facts

Large Language Models and symbolic logic are related through neuro-symbolic AI, which integrates symbolic logic from knowledge graphs with the deep neural networks underlying LLMs to create hybrid models [1] and which redefines program synthesis by merging the generative fluency of LLMs with symbolic rigor [2]. This holds even though LLMs do not inherently operate via symbolic logic; they merely retain metaphors of cognition, such as "mind-as-machine" [3].
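The hybrid-model idea in [1] can be sketched with a toy example. Nothing here comes from the cited papers: the schema, entities, and function names are all illustrative assumptions. A stand-in "neural" scorer proposes candidate facts, and a symbolic layer derived from a knowledge-graph schema rejects any candidate that violates type constraints.

```python
# Toy neuro-symbolic sketch (illustrative only, not any cited paper's method):
# a "neural" scorer rates candidate triples, and a symbolic rule layer built
# from a knowledge-graph schema filters out type-inconsistent ones.

# Schema: relation -> (required subject type, required object type)
SCHEMA = {
    "capital_of": ("City", "Country"),
    "born_in": ("Person", "City"),
}

ENTITY_TYPES = {
    "Paris": "City",
    "France": "Country",
    "Ada Lovelace": "Person",
}

def neural_scores(candidates):
    """Stand-in for a neural model: assigns a plausibility score per triple."""
    return {c: 0.9 if c[0] in ENTITY_TYPES and c[2] in ENTITY_TYPES else 0.1
            for c in candidates}

def symbolically_valid(triple):
    """Symbolic check: the triple must respect the schema's type constraints."""
    subj, rel, obj = triple
    if rel not in SCHEMA:
        return False
    subj_type, obj_type = SCHEMA[rel]
    return ENTITY_TYPES.get(subj) == subj_type and ENTITY_TYPES.get(obj) == obj_type

def hybrid_accept(candidates, threshold=0.5):
    """Accept a fact only if the neural score is high AND the symbolic check passes."""
    scores = neural_scores(candidates)
    return [c for c in candidates if scores[c] >= threshold and symbolically_valid(c)]

candidates = [
    ("Paris", "capital_of", "France"),         # plausible and type-consistent
    ("France", "capital_of", "Paris"),         # scored high, but argument types are swapped
    ("Ada Lovelace", "capital_of", "France"),  # a Person cannot be a capital
]
print(hybrid_accept(candidates))  # only the first triple survives the symbolic filter
```

The point of the sketch is the division of labor: the statistical component ranks, the symbolic component vetoes, which is one common pattern for making neural outputs respect hard constraints.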

Facts (3)

Sources
Not Minds, but Signs: Reframing LLMs through Semiotics (arXiv) — 1 fact
Claim: Despite modern Large Language Models (LLMs) not operating through symbolic logic, the metaphors of cognition have persisted and intensified with the rise of deep learning, with traces of the "mind-as-machine" metaphor surviving in recent neural approaches.
A Comprehensive Review of Neuro-symbolic AI for Robustness ... (Springer) — 1 fact
Claim: Neuro-symbolic AI redefines program synthesis and verification by merging the generative fluency of large language models with the rigor of symbolic logic.
Practices, opportunities and challenges in the fusion of knowledge ... (Frontiers) — 1 fact
Claim: The integration of symbolic logic from knowledge graphs with deep neural networks in large language models creates hybrid models where decisions emerge from entangled attention weights and vector operations, making reasoning paths difficult to trace.