Claim
Neural networks map visual input features to intermediate representations through weights and activation functions; these representations are neither directly observable nor directly testable, which limits the networks' interpretability even when logical expressions are used in the reasoning process.
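A minimal sketch of the mechanism the claim describes, using NumPy with made-up weights and inputs (all values here are hypothetical, chosen only for illustration): a hidden layer produces an intermediate representation that is just an unlabeled vector of numbers, with no directly observable or testable meaning attached to its components.

```python
import numpy as np

# Hypothetical visual input features (e.g., normalized pixel statistics).
x = np.array([0.2, 0.8, 0.5])

# Arbitrary weight matrix and bias for one hidden layer (made up for illustration).
W = np.array([[0.1, -0.4, 0.7],
              [0.9,  0.3, -0.2]])
b = np.array([0.05, -0.1])

# The intermediate representation: a ReLU activation over a linear map.
h = np.maximum(0.0, W @ x + b)

# `h` is only a vector of activations; nothing in it names a concept,
# so there is no direct way to observe or test what each unit encodes.
print(h)
```

Even if downstream reasoning over `h` were expressed in logical form, the claim's point is that the step producing `h` itself remains opaque.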

Authors

Sources

Referenced by nodes (1)