Explainable Artificial Intelligence (XAI)
Also known as: XAI, Explainable AI, Explainable AI (XAI), Explainable Artificial Intelligence
Facts (19)
Sources
Unlocking the Potential of Generative AI through Neuro-Symbolic ... arxiv.org Feb 16, 2025 5 facts
Claim: Explainable AI (XAI) enhances accountability and supports ethical AI adoption by providing insight into model behavior, keeping autonomous systems interpretable in sensitive and dynamic environments.
Reference: Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, et al. published 'Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI' in Information Fusion in 2020.
Claim: Explainable AI (XAI) systems often combine neural models for feature extraction with symbolic frameworks to produce explanations that are easily understood by humans.
Claim: Explainable AI (XAI) focuses on making AI systems transparent and interpretable.
Claim: The growing autonomy of agentic AI systems underscores the importance of Explainable AI (XAI) to ensure transparency and trust.
Neuro-Symbolic AI: Explainability, Challenges & Future Trends linkedin.com Dec 15, 2025 4 facts
Claim: Explainable AI (XAI) systems require transparency to ensure trust and accountability, particularly in sectors such as healthcare and finance.
Claim: LIME and SHAP are Explainable AI (XAI) techniques that attribute a complex model's predictions to its input features, providing insight into its behavior.
Claim: Explainable AI (XAI) addresses the need for transparency in AI systems across sectors such as healthcare and finance.
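The attribution idea behind SHAP can be sketched without the library itself: exact Shapley values average a feature's marginal contribution over all coalitions of the other features. A minimal sketch in plain Python, where the linear `model` and the baseline are hypothetical stand-ins for a real black-box model:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley feature attributions for a black-box model f.
    Features outside a coalition are replaced by their baseline value."""
    n = len(x)

    def value(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight for a coalition of this size
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += w * (value(set(S) | {i}) - value(set(S)))
        phi.append(total)
    return phi

# Hypothetical linear model: attributions should recover each term's weight.
model = lambda z: 3 * z[0] + 2 * z[1] + z[2]
print(shapley_values(model, x=[1, 1, 1], baseline=[0, 0, 0]))
# → approximately [3.0, 2.0, 1.0]
```

For a linear model the attributions equal coefficient × (input − baseline), which makes the sketch easy to sanity-check; production tools like SHAP approximate these sums efficiently for models with many features.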
Claim: Saliency Maps 2.0 is an Explainable AI (XAI) technique that visualizes the internal workings of neural networks by fusing saliency maps with gradient-based attribution methods.
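The core of gradient-based attribution is the magnitude of the output's derivative with respect to each input: large values mark inputs the model is most sensitive to. A minimal sketch using central finite differences in place of backpropagation, with a hypothetical `score` function standing in for a trained network:

```python
def saliency(f, x, eps=1e-6):
    """Gradient-based saliency: |df/dx_i| estimated by central differences.
    Larger values mark inputs the output is most sensitive to."""
    grads = []
    for i in range(len(x)):
        hi, lo = x[:], x[:]
        hi[i] += eps
        lo[i] -= eps
        grads.append(abs((f(hi) - f(lo)) / (2 * eps)))
    return grads

# Hypothetical scoring function standing in for a trained network.
score = lambda z: 0.5 * z[0] ** 2 + 4 * z[1]
print(saliency(score, [2.0, 1.0]))  # sensitivity is highest for z[1]
```

Real saliency-map implementations compute the same quantity with automatic differentiation over image pixels and render the result as a heat map.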
A Comprehensive Review of Neuro-symbolic AI for Robustness ... link.springer.com Dec 9, 2025 2 facts
Reference: Lecue (2020) analyzed the role of knowledge graphs in explainable AI.
Reference: Raees, Meijerink, Lykourentzou, Khan, and Papangelis published 'From Explainable to Interactive AI: A Literature Review on Current Trends in Human-AI Interaction' in the International Journal of Human-Computer Studies in 2024.
LLM Hallucinations: Causes, Consequences, Prevention - LLMs llmmodels.org May 10, 2024 1 fact
Procedure: Preventing large language model hallucinations requires a multifaceted approach: improving training data quality, developing context-aware algorithms, ensuring human oversight, and building transparent, explainable AI models.
Building Trustworthy NeuroSymbolic AI Systems - arXiv arxiv.org 1 fact
Reference: Sheth et al. (2021) discussed 'knowledge-intensive language understanding' as a framework for explainable AI.
A Survey on the Theory and Mechanism of Large Language Models arxiv.org Mar 12, 2026 1 fact
Reference: 'Explainable AI: A Review of Machine Learning Interpretability Methods' was published in Entropy 23(1), article 18.
Knowledge Graphs vs RAG: When to Use Each for AI in 2026 - Atlan atlan.com Feb 12, 2026 1 fact
Claim: Knowledge graphs provide explainable AI by generating reasoning chains, such as showing the path 'Customer → reduced_usage_by_40% → missed_invoices → support_escalations → similar_customers_churned' to explain a churn risk assessment.
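A reasoning chain like the one above can be produced by a path search over the graph's triples. A minimal sketch, where the `TRIPLES` mini-graph is a hypothetical reconstruction of the churn example:

```python
from collections import deque

# Hypothetical mini knowledge graph: (subject, relation, object) triples.
TRIPLES = [
    ("Customer", "reduced_usage_by_40%", "UsageDrop"),
    ("UsageDrop", "missed_invoices", "BillingIssues"),
    ("BillingIssues", "support_escalations", "Escalations"),
    ("Escalations", "similar_customers_churned", "ChurnRisk"),
]

def explain(start, goal):
    """BFS over the triples; returns the chain of relations linking
    start to goal as a human-readable explanation, or None."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        node, relations = frontier.popleft()
        if node == goal:
            return " → ".join([start] + relations)
        for subj, rel, obj in TRIPLES:
            if subj == node and obj not in seen:
                seen.add(obj)
                frontier.append((obj, relations + [rel]))
    return None

print(explain("Customer", "ChurnRisk"))
# → Customer → reduced_usage_by_40% → missed_invoices →
#   support_escalations → similar_customers_churned
```

Because the explanation is read directly off graph edges, every step in the chain is inspectable, which is the transparency property the claim attributes to knowledge graphs.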
Recent breakthroughs in the valorization of lignocellulosic biomass ... pubs.rsc.org Jun 7, 2025 1 fact
Claim: Bashir et al. developed a hybrid machine learning model using explainable artificial intelligence to determine the optimal water-to-binder ratio for improving the durability, performance, and sustainability of concrete.
Papers - Dr Vaishak Belle vaishakbelle.github.io 1 fact
Reference: D. Hemment, D. Murray-Rust, Vaishak Belle, R. Aylett, M. Vidmar, and F. Broz authored 'Experiential AI: Between Arts and Explainable AI', published in Leonardo in 2024.
Call for Papers: Special Session on KR and Machine Learning kr.org 1 fact
Claim: The Special Session on KR and Machine Learning at KR2022 welcomes papers on topics including:
- learning symbolic knowledge (ontologies, knowledge graphs, action theories, commonsense knowledge, spatial/temporal theories, preference/causal models)
- logic-based/relational learning algorithms
- machine-learning driven reasoning
- neural-symbolic learning
- statistical relational learning
- multi-agent learning
- symbolic reinforcement learning
- learning symbolic abstractions from unstructured data
- explainable AI
- expressive power of learning representations
- knowledge-driven natural language understanding and dialogue
- knowledge-driven decision making
- knowledge-driven intelligent systems for IoT and cybersecurity
- architectures combining data-driven techniques with formal reasoning
Neuro-Symbolic AI: Explainability, Challenges, and Future Trends arxiv.org Nov 7, 2024 1 fact
Claim: The field of explainable artificial intelligence (XAI) currently lacks a precise computational-theoretical framework for how humans understand AI explanations, which limits theory-building for XAI systems.