claim
Because neural networks transform visual input features into intermediate representations through learned weights and activation functions, those representations are not directly observable or testable; this limits interpretability even when logical expressions are used later in the reasoning process.
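As an illustrative sketch (not from the source), a single layer shows the mechanism the claim describes: input features pass through weights and a nonlinear activation, and the resulting hidden vector is a list of numbers with no direct symbolic meaning. All names and values here are hypothetical.

```python
import numpy as np

# Toy single layer: input features -> intermediate representation
# via weights and a ReLU activation (values are random, for illustration).
rng = np.random.default_rng(0)

x = rng.normal(size=4)          # visual input features (toy example)
W = rng.normal(size=(3, 4))     # learned weights (random here)
b = rng.normal(size=3)          # bias

h = np.maximum(0.0, W @ x + b)  # intermediate representation

# h is a vector of raw activations: nothing in it can be directly
# observed or tested as a logical proposition, which is the
# interpretability gap the claim points to.
print(h)
```

Even if a symbolic reasoner consumes the network's outputs downstream, `h` itself remains opaque; that opacity is the observability and testability gap at issue.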
Authors
Sources
- Neuro-Symbolic AI: Explainability, Challenges, and Future Trends arxiv.org via serper
Referenced by nodes (1)
- artificial neural networks concept