Explanation-based fine-tuning improves neural network robustness against spurious correlations by encouraging models to focus on human-relevant features during training.
The 'Symbolic[Neuro]' architecture transforms symbolic inputs into vector representations, processes them entirely through neural networks, and re-symbolizes the output.
Neuro-symbolic programming allows users to write high-level programs that utilize neural networks as subroutines for perception tasks, enabling the resulting system to perform probabilistic inference or planning.
In transportation, neuro-symbolic AI enhances travel demand prediction by combining interpretable decision tree–based symbolic rules with neural network learning, allowing models to capture complex geospatial and socioeconomic patterns with improved accuracy and transparency.
Deep discriminative models estimate the conditional distribution p(y|x) by learning a mapping through a neural network to output predictive distributions directly.
Symbolic constraints can regularize neural network training, preventing convergence to solutions that violate established domain knowledge or rely on spurious correlations in the training data [74].
Fenske et al. employed a neuro-symbolic framework for medical diagnosis that integrates neural networks with a Bayesian network structured over diseases and symptoms: neural models interpret raw inputs to produce probabilistic symptom likelihoods, which are then propagated through the Bayesian network to infer posterior probabilities of diagnoses.
The goal of neuro-symbolic AI is to unify neural networks and symbolic AI to combine the inductive learning capacity of neural networks—which excels at discovering latent patterns from unstructured or noisy data—with the explicit knowledge representations of symbolic AI, which enable interpretability, rule-based reasoning, and systematic extension to new tasks.
Kabaha and Cohen (2024) discuss the verification of global robustness in neural networks.
The Scallop framework integrates with PyTorch to allow neural networks to feed probabilistic facts into logic programs and receive gradients from logical reasoning outcomes, enabling training with feedback from logical constraints.
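The gradient flow this describes can be illustrated with a minimal pure-Python sketch. This is not Scallop's actual API; it only shows the underlying idea that a query probability computed by probabilistic logic (conjunction as product, disjunction of proofs as noisy-or) remains a differentiable function of the neural network's output probabilities, so a gradient can flow back to the network.

```python
# Hypothetical query: path(a, c), provable two ways:
#   proof 1: edge(a, b) AND edge(b, c)   (conjunction -> product)
#   proof 2: edge(a, c)                  (a direct edge)
# The edge probabilities p_ab, p_bc, p_ac stand in for neural network outputs.

def path_prob(p_ab, p_bc, p_ac):
    proof1 = p_ab * p_bc
    proof2 = p_ac
    # Noisy-or combination of independent proofs
    return 1.0 - (1.0 - proof1) * (1.0 - proof2)

def dpath_dp_ab(p_ab, p_bc, p_ac, eps=1e-6):
    # Finite-difference gradient of the query probability w.r.t. p_ab:
    # the training signal a framework like Scallop would backpropagate
    # into the neural network that produced p_ab.
    return (path_prob(p_ab + eps, p_bc, p_ac) - path_prob(p_ab, p_bc, p_ac)) / eps
```

Because the gradient with respect to each fact probability is nonzero whenever that fact can still affect the query, logical constraints on the query's outcome translate directly into learning signals for the perception network.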
Mohapatra, Weng, Chen, Liu, and Daniel (2020) propose a method for verifying the robustness of neural networks against a family of semantic perturbations.
Neuro-symbolic AI combines the learning capabilities of neural networks with the logical rigor and transparency of symbolic reasoning to address robustness, uncertainty quantification, and intervenability in AI systems.
A primary driver for the integration of neural and symbolic AI is the quest for explainability, as neural networks are often criticized as 'black boxes' with internal decision processes that are difficult to interpret and debug, whereas symbolic representations allow for explicit explanations and traceable decision paths.
Huang, J., Li, Z., Chen, B., Samel, K., Naik, M., Song, L., and Si, X. created Scallop, a system for scalable differentiable reasoning that bridges probabilistic deductive databases and neural networks, published in Advances in Neural Information Processing Systems (2021).
Neuro-symbolic systems aim to harness the efficiency and scalability of neural networks while preserving the transparency and verifiability inherent in symbolic reasoning.
In program synthesis, the fusion of neural networks with symbolic reasoning enables models to generate and optimize code for tasks ranging from sorting algorithms to complex logic programming challenges.
Acharya, K., Lad, M., Sun, L., and Song, H. (2025) developed a neurosymbolic approach for travel demand prediction that integrates decision tree rules into neural networks, published in the proceedings of the 2025 International Wireless Communications and Mobile Computing Conference (IWCMC).
Parametric deep discriminative models assume a specific form for the output distribution (such as Gaussian, Laplace, or Bernoulli) and train the neural network to estimate the parameters of that distribution.
The neuro-symbolic concept learner for visual scenes uses a neural network to propose object relations and a logic module to refine those relations.
Neural networks can be integrated with symbolic knowledge graphs that contain uncertain facts, such as weighted edges or confidence scores.
Differentiable reasoning modules improve the correctness and compositional generalization of neural networks by ensuring outputs tend toward satisfying logical rules.
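One common way to make outputs "tend toward" satisfying a rule is a semantic-loss-style term: the negative log-probability that the network's independent Bernoulli outputs jointly satisfy the rule. The sketch below is illustrative, using an XOR rule over two outputs as the assumed constraint.

```python
import math

def xor_constraint_prob(p1, p2):
    # Probability that exactly one of two independent Bernoulli outputs
    # is true, i.e. that the rule "output1 XOR output2" holds.
    return p1 * (1.0 - p2) + (1.0 - p1) * p2

def semantic_loss(p1, p2):
    # Negative log-probability of satisfying the rule; adding this term
    # to the training loss nudges outputs toward rule-consistent values.
    return -math.log(xor_constraint_prob(p1, p2))
```

Outputs that already respect the rule (one probability high, the other low) incur a smaller penalty than maximally ambiguous ones, which is exactly the pressure toward compositional, rule-consistent behavior described above.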
Wang, Ai, Lu, Su, Yu, Zhang, Zhu, and Liu (2024) provide a survey of methods for assessing neural network robustness in image recognition tasks.
Robots can use probabilistic programs to model uncertainty in their environment while using neural networks to analyze sensor data, allowing the system to perform Bayesian updating and planning that is verifiable at the program level.
Neuro-symbolic AI methods integrate the adaptive learning capabilities of neural networks with the structured, rule-based reasoning of symbolic systems to enhance system robustness, provide reliable uncertainty measures, and facilitate human intervention.
In reasoning-for-learning systems, symbolic priors are introduced during training, often through loss regularization, constrained optimization, or differentiable logic encodings, to guide neural networks toward semantically consistent predictions.
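The loss-regularization route can be sketched concretely. In this hypothetical example, the symbolic prior is a mutual-exclusivity rule ("an object cannot be both a cat and a dog"), encoded as a differentiable penalty on joint confidence and added to an ordinary task loss; the names and the weighting scheme are illustrative, not drawn from any specific system.

```python
import math

def task_loss(p_cat, target):
    # Standard binary cross-entropy for the "cat" output
    return -math.log(p_cat) if target else -math.log(1.0 - p_cat)

def exclusivity_penalty(p_cat, p_dog):
    # Symbolic prior: cat and dog are mutually exclusive labels.
    # The product is near zero unless both probabilities are high.
    return p_cat * p_dog

def regularized_loss(p_cat, p_dog, target, lam=1.0):
    # lam trades off data fit against consistency with the prior
    return task_loss(p_cat, target) + lam * exclusivity_penalty(p_cat, p_dog)
```

A prediction that is confident in both labels at once is penalized even when the task loss alone would accept it, which is how such priors steer training away from semantically inconsistent solutions.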
Symbolic reasoners in neuro-symbolic systems can verify neural network predictions against symbolic knowledge bases or logical constraints, allowing the system to flag unreliable outputs or correct predictions based on logical rules.
Manhaeve et al. demonstrated that DeepProbLog achieves better uncertainty estimation than pure neural networks on tasks requiring logical consistency.
Neuro-symbolic probabilistic models can perform joint inference, where uncertain visual detections from neural networks are validated or adjusted by logical constraints with associated confidences.
In Gaussian parametric models, the neural network estimates the mean mu(x) and variance sigma^2(x) to capture data uncertainty, represented by the formula p(y|x) = N(y | mu(x), sigma^2(x)).
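Training such a model amounts to minimizing the Gaussian negative log-likelihood of each target under the predicted mu(x) and sigma^2(x). A minimal sketch of that loss (the network that would produce mu and sigma^2 is omitted; the values below are assumed for illustration):

```python
import math

def gaussian_nll(y, mu, sigma2):
    # Negative log-likelihood of target y under N(mu, sigma2):
    # 0.5 * (log(2*pi*sigma2) + (y - mu)^2 / sigma2)
    return 0.5 * (math.log(2.0 * math.pi * sigma2) + (y - mu) ** 2 / sigma2)

# The same prediction error is penalized less when the model also
# reports high variance, i.e. admits high data uncertainty.
confident_wrong = gaussian_nll(y=2.0, mu=0.0, sigma2=0.1)
uncertain_wrong = gaussian_nll(y=2.0, mu=0.0, sigma2=4.0)
```

This asymmetry is what lets the network learn input-dependent variance: inflating sigma^2(x) on genuinely noisy inputs lowers the loss, while inflating it everywhere is discouraged by the log(sigma2) term.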
Recent neuro-symbolic work integrates neural networks with probabilistic programming languages (PPLs) such as Pyro or Stan, allowing symbolic probabilistic models to include neural subroutines and output full probability distributions over answers.
The integration of symbolic reasoning within neural network frameworks offers theoretical advantages for AI robustness, including the ability to incorporate explicit knowledge, perform logical inference, leverage abstract representations, and improve interpretability.
Gaussian Process Hybrid Neural Networks combine neural networks with Gaussian processes to estimate predictive uncertainty based on sample density, with the Gaussian process component providing a measure of uncertainty that increases as the test point moves further from the training data.
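The distance-dependent uncertainty of the GP component can be shown in its simplest form: with an RBF kernel and a single noise-free training point, the closed-form posterior variance grows monotonically as the test point moves away from the training data. This toy sketch stands in for the GP head of such a hybrid; a real model would condition on many points (and on neural features rather than raw inputs).

```python
import math

def rbf(x1, x2, length=1.0):
    # Squared-exponential (RBF) kernel
    return math.exp(-((x1 - x2) ** 2) / (2.0 * length ** 2))

def posterior_var(x_test, x_train):
    # Noise-free GP posterior variance given one training point:
    # var(x*) = k(x*, x*) - k(x*, x) k(x, x)^-1 k(x, x*)
    return rbf(x_test, x_test) - rbf(x_test, x_train) ** 2
```

Near the training point the variance collapses toward zero; far away it saturates at the prior variance, which is exactly the "uncertainty increases with distance from the training data" behavior described above.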
Logic Tensor Networks (LTN) use continuously valued logic with fuzzy semantics to train neural networks that satisfy given logical axioms, such as enforcing that the truth degree of a premise is less than or equal to the truth degree of a conclusion.
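The premise-to-conclusion axiom can be made concrete with one common fuzzy semantics, the Łukasiewicz implication, whose truth degree is 1 exactly when premise <= conclusion. The sketch below is illustrative of the LTN idea rather than the LTN library's API: the axiom's violation degree becomes a loss term the network is trained to minimize.

```python
def lukasiewicz_implies(premise, conclusion):
    # Fuzzy truth degree of "premise -> conclusion" on values in [0, 1];
    # equals 1.0 iff premise <= conclusion.
    return min(1.0, 1.0 - premise + conclusion)

def axiom_violation(premise, conclusion):
    # Degree to which the axiom is violated; an LTN-style trainer
    # minimizes this over the truth degrees the network assigns.
    return 1.0 - lukasiewicz_implies(premise, conclusion)
```

When the network assigns a high truth degree to the premise but a low one to the conclusion, the violation is large, and gradient descent pushes the truth degrees back toward satisfying the axiom.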
In neuro-symbolic AI, formal logic provides precision and proofs, probabilistic models handle uncertainty and noise, and neural networks excel at learning from raw data.
Neuro-symbolic systems can potentially handle novel compositions of learned elements more effectively than monolithic neural networks by operating on discrete concepts or composing functions represented by neural modules based on symbolic structure.
Yang, Z., Ishay, A., and Lee, J. introduced NeurASP, a method for embracing neural networks into answer set programming, in a 2023 arXiv preprint (arXiv:2307.07700).
The 'Neuro[Symbolic]' architecture directly encodes symbolic structures into the architecture of neural networks, using techniques like Tensor Product Representations (TPRs) and Logic Tensor Networks (LTNs) to embed logical constraints into learning dynamics.
In learning-for-reasoning systems, neural networks are employed to augment or enable symbolic reasoning processes, either by reducing the symbolic search space or by abstracting structured representations from raw data.
Rasheed et al. (2024) published a study in Bioengineering titled 'Integrating convolutional neural networks with attention mechanisms for magnetic resonance imaging-based classification of brain tumors,' which explores the application of neural networks in medical imaging.
Neuro-symbolic models integrate robustness by using hybrid perception-reasoning pipelines where neural networks function as noisy sensory encoders and symbolic modules validate or correct outputs using logic-based constraints.
Goodfellow et al. demonstrated that neural networks with high accuracy on clean inputs can be extremely fragile and easily misclassify inputs when perturbed by imperceptible noise, due to an overreliance on non-robust or spurious features.
Ahmad et al. (2024) published 'Multi-feature fusion-based convolutional neural networks for EEG epileptic seizure prediction in consumer internet of things' in IEEE Transactions on Consumer Electronics, focusing on seizure prediction using neural networks.