Claim
Incorporating clinically validated knowledge into LLMs enhances user-level explainability: the model can ground its decisions in clinical concepts that clinicians find comprehensible and actionable, potentially enabling the LLM to mirror a clinician's decision-making process through NeuroSymbolic AI, as proposed by Sheth, Roy, and Gaur (2023).
Authors
Sources
- Building Trustworthy NeuroSymbolic AI Systems (arXiv, retrieved via Serper)
Referenced by nodes (2)
- Large Language Models concept
- Neuro-symbolic artificial intelligence concept