deep neural networks
Also known as: deep neural models, deep networks
Facts (26)
Sources
A Comprehensive Review of Neuro-symbolic AI for Robustness ... (link.springer.com, Dec 9, 2025) · 8 facts
Reference: Hu et al. [28] introduce logic rules into the training of deep neural networks via posterior regularization, improving generalization while enforcing structured consistency.
Reference: Hu, Ma, Liu, Hovy, and Xing (2016) describe a method for harnessing deep neural networks using logic rules.
Reference: Deep Generative Models (DGMs) are probabilistic frameworks that use deep neural networks to approximate the underlying distribution of high-dimensional data, denoted p_data(x).
Claim: Traditional deep neural networks typically generalize by interpolating between training examples and struggle when faced with novel combinations or configurations of inputs not seen during training.
Reference: J. Gawlikowski, C.R.N. Tassi, M. Ali, J. Lee, M. Humt, J. Feng, A. Kruspe, R. Triebel, P. Jung, and R. Roscher published 'A survey of uncertainty in deep neural networks' in the journal Artificial Intelligence Review in 2023.
Claim: Existing work on optimizing deep neural networks using MapReduce for parallelism and efficiency provides a foundation for embedding symbolic reasoning layers atop high-throughput pipelines.
Claim: Logic Neural Networks (LNNs) trained on structured clinical ontologies outperform traditional deep networks in differential diagnosis tasks, providing both improved accuracy and clause-level interpretability that aligns with FDA transparency mandates for medical AI.
Reference: Katz, Barrett, Dill, Julian, and Kochenderfer (2017) introduce Reluplex, an efficient SMT solver designed for verifying deep neural networks.
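The DGM fact above describes using a model p_theta(x) to approximate p_data(x). As a minimal sketch of that same maximum-likelihood objective (using a closed-form Gaussian model rather than a deep network, purely for illustration; a DGM would parameterize p_theta with a neural network and optimize the same quantity by gradient ascent):

```python
import numpy as np

# Draw samples from an (unknown, here simulated) data distribution p_data(x).
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=10_000)

# Fit a parametric model p_theta(x) by maximum likelihood.
# For a Gaussian the MLE is closed-form: sample mean and standard deviation.
mu_hat = data.mean()
sigma_hat = data.std()

def avg_log_likelihood(x, mu, sigma):
    """Average log p_theta(x) under a Gaussian model with parameters (mu, sigma)."""
    return np.mean(-0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2))

# The fitted parameters assign the data higher average log-likelihood
# than an arbitrary alternative model, e.g. a standard normal.
print(mu_hat, sigma_hat)
print(avg_log_likelihood(data, mu_hat, sigma_hat))
```

The names here (`avg_log_likelihood`, the simulated `data`) are illustrative, not from any cited paper; the point is only that "approximating p_data(x)" means choosing parameters that maximize the model's likelihood on samples from the data distribution.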
Neuro-Symbolic AI: Explainability, Challenges, and Future Trends (arxiv.org, Nov 7, 2024) · 5 facts
Claim: Hooshyar et al. (2023) argue that augmenting deep neural networks with symbolic knowledge can contribute to the development of trustworthy and interpretable AI systems for education.
Reference: Marwin H.S. Segler, Mike Preuss, and Mark P. Waller demonstrated a method for planning chemical syntheses using a combination of deep neural networks and symbolic AI, published in Nature in 2018.
Reference: Xuan Xie, Kristian Kersting, and Daniel Neider proposed a method for the neuro-symbolic verification of deep neural networks in 2022.
Reference: R. Saravanakumar, N. Krishnaraj, S. Venkatraman, B. Sivakumar, S. Prasanna, and K. Shankar developed a fault diagnosis model for rotating machinery that combines hierarchical symbolic analysis, particle swarm optimization, and deep neural networks, published in the journal Measurement in 2021.
Reference: Gopinath et al. (2018) developed a technique for applying symbolic execution to deep neural networks.
The Synergy of Symbolic and Connectionist AI in LLM-Empowered ... (arxiv.org, Jul 11, 2024) · 3 facts
Claim: Connectionist models, specifically deep neural networks, identify patterns in pixel data to perform image recognition tasks.
Claim: LLM-enhanced autonomous agents utilize deep neural networks for processing while employing symbolic AI principles to guide task decomposition and planning by breaking tasks into discrete, logical steps.
Claim: The fusion of symbolic structures and deep neural networks creates a synergy that boosts the capabilities of LLM-enhanced autonomous agents.
Track: Poster Session 3, AISTATS 2026 (virtual.aistats.org) · 2 facts
Claim: The Local Learning Coefficient (LLC) is a complexity measure for deep neural networks that leverages Singular Learning Theory (SLT) to account for singularities in the loss landscape geometry.
Claim: Deep networks exhibit a phenomenon called 'agreement-on-the-line' under distribution shifts, where in-distribution versus out-of-distribution accuracy is strongly linearly correlated across architectures and hyperparameters, and the agreement between predictions of independently trained networks follows the same linear trend.
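As background for the LLC fact above (a standard result from Singular Learning Theory, stated here as context rather than taken from the poster itself): Watanabe's asymptotic expansion of the Bayesian free energy reads

```latex
F_n = n L_n(w_0) + \lambda \log n + O_p(\log \log n)
```

where L_n(w_0) is the empirical loss at the optimal parameter w_0 and λ is the learning coefficient. For regular models λ = d/2 (recovering the BIC penalty), while at singularities of the loss landscape λ can be much smaller; the Local Learning Coefficient restricts this quantity to a neighborhood of a particular trained parameter.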
The Evidence for AI Consciousness, Today - AI Frontiers (ai-frontiers.org, Dec 8, 2025) · 1 fact
Claim: Deep neural networks satisfy the HOT-4 indicator of the Butlin et al. framework, which requires smooth representation spaces.
Unlocking the Potential of Generative AI through Neuro-Symbolic ... (arxiv.org, Feb 16, 2025) · 1 fact
Reference: David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. authored 'Mastering the game of Go with deep neural networks and tree search,' published in Nature in 2016.
Knowledge Graphs: Opportunities and Challenges - Springer Nature (link.springer.com, Apr 3, 2023) · 1 fact
Procedure: Wang et al. (2018d) proposed a graph reasoning model to recognize social relationships of people in images posted on social media by enforcing a function based on a social knowledge graph and deep neural networks.
Construction of Knowledge Graphs: State and Challenges - arXiv (arxiv.org) · 1 fact
Claim: Deep neural networks are popular for Named Entity Recognition (NER) because they require less human interaction than Conditional Random Fields (CRFs), which depend on extensive feature engineering.
Neuro-Symbolic AI: Explainability, Challenges & Future Trends (linkedin.com, Dec 15, 2025) · 1 fact
Perspective: The author of the LinkedIn post posits that the emergence of behavior in deep neural networks after passing a structural depth threshold resonates with condition-based realization architectures, suggesting that AI behavior emerges when structurally defined conditions are satisfied rather than merely through accumulated computation.
A Survey on the Theory and Mechanism of Large Language Models (arxiv.org, Mar 12, 2026) · 1 fact
Reference: The paper 'Axiomatic attribution for deep networks' presents an axiomatic method for attributing a deep neural network's predictions to its input features.
The Synergy of Symbolic and Connectionist AI in LLM ... (arxiv.org) · 1 fact
Claim: Connectionist models driven by deep neural networks identify subtle patterns in pixel data, similar to how human brains recognize faces in a crowd.
Practices, opportunities and challenges in the fusion of knowledge ... (frontiersin.org) · 1 fact
Claim: The integration of symbolic logic from knowledge graphs with deep neural networks in large language models creates hybrid models where decisions emerge from entangled attention weights and vector operations, making reasoning paths difficult to trace.