reference
A survey by Nazi and Peng (2024) provides a comprehensive review of LLMs in healthcare, finding that domain-specific adaptations such as instruction tuning and retrieval-augmented generation can improve patient outcomes and streamline the dissemination of medical knowledge, while noting persistent challenges around reliability, interpretability, and hallucination risk.
Sources
- Medical Hallucination in Foundation Models and Their ... (www.medrxiv.org)
Referenced by nodes (4)
- Large Language Models concept
- Retrieval-Augmented Generation (RAG) concept
- health care concept
- instruction tuning concept