Relations (1)

related (score 4.00) — strongly supported by 15 facts

Large Language Models are increasingly applied in healthcare for clinical decision support and research [1], though their integration carries significant risks, including hallucinations and a lack of determinism {fact:1, fact:2, fact:6, fact:8}. Consequently, researchers are developing frameworks and methodologies to evaluate and refine these models so they can be deployed safely and reliably in medical settings {fact:11, fact:12, fact:13, fact:15}.

Facts (15)

Sources
Medical Hallucination in Foundation Models and Their ... (medrxiv.org, medRxiv) — 4 facts
claim: The integration of large language models into healthcare introduces risks to patient care, including the potential for hallucinated outputs to influence therapeutic choices, diagnostic pathways, and patient-provider communication, as noted by Topol (2019), Mehta and Devarakonda (2018), and Hata et al. (2022).
claim: Subtle or plausible-sounding misinformation generated by LLMs in healthcare can influence diagnostic reasoning, therapeutic recommendations, or patient counseling, as noted by Miles-Jay et al. (2023), Xia et al. (2024), Mehta and Devarakonda (2018), and Mohammadi et al. (2023).
claim: Foundation models, including Large Language Models (LLMs) and Large Vision Language Models (VLMs), are used in healthcare for clinical decision support, medical research, and improving healthcare quality and safety.
reference: A survey by Nazi and Peng (2024) provides a comprehensive review of LLMs in healthcare, highlighting that domain-specific adaptations such as instruction tuning and retrieval-augmented generation can enhance patient outcomes and streamline medical knowledge dissemination, while noting persistent challenges regarding reliability, interpretability, and hallucination risk.
A framework to assess clinical safety and hallucination rates of LLMs ... (nature.com, Nature) — 3 facts
claim: Large Language Models can output unfactual or unfaithful text with high degrees of confidence, which poses significant risks in high-stakes environments like healthcare.
reference: The article 'Evaluating large language models for use in healthcare: a framework for translational value assessment', published in Informatics in Medicine Unlocked (2023), proposes a framework for assessing the value of LLMs in healthcare.
reference: The article 'A framework for human evaluation of large language models in healthcare derived from literature review', published in NPJ Digital Medicine (2024), establishes a framework for human-based assessment of LLMs in healthcare.
A survey on augmenting knowledge graphs (KGs) with large ... (link.springer.com, Springer) — 2 facts
claim: The integration of knowledge graphs with LLMs enhances diagnostic tools and personalized medicine in healthcare, improves risk assessment and fraud detection in finance, and enhances recommendation engines and customer service in e-commerce.
claim: The integration of Large Language Models (LLMs) and Knowledge Graphs (KGs) supports advanced applications in healthcare, finance, and e-commerce by enabling real-time data analysis and decision-making processes.
A Knowledge Graph-Based Hallucination Benchmark for Evaluating ... (arxiv.org, arXiv) — 1 fact
reference: The paper 'Current applications and challenges in large language models for patient care: a systematic review' examines the use of large language models in healthcare settings.
Medical Hallucination in Foundation Models and Their Impact on ... (medrxiv.org, medRxiv) — 1 fact
claim: Hallucinations in medical Large Language Models often arise from a confluence of factors relating to data, model architecture, and the unique complexities of healthcare.
Building Trustworthy NeuroSymbolic AI Systems (arxiv.org, arXiv) — 1 fact
perspective: The authors argue that establishing a robust methodology for ensuring consistency, reliability, explainability, and safety is critical before deploying Large Language Models in sensitive domains such as healthcare and well-being.
Role of Open Source Software in Rise of AI (nutanix.com, Nutanix) — 1 fact
claim: Current large language models (LLMs) lack the level of determinism required by some enterprises, particularly in regulated industries like finance and healthcare, necessitating further model refinement.
The Role of Hallucinations in Large Language Models (cloudthat.com, CloudThat) — 1 fact
claim: Hallucinations in large language models pose risks in high-stakes domains, such as misdiagnosing conditions in healthcare, fabricating legal precedents, generating fake market data in finance, and providing incorrect facts in education.
Unknown source — 1 fact
claim: Retrieval-Augmented Generation (RAG), knowledge graphs, Large Language Models (LLMs), and Artificial Intelligence (AI) are increasingly being applied in knowledge-heavy industries, such as healthcare.
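The RAG pattern named above can be sketched in a few lines: retrieve supporting text for a query, then prepend it to the model prompt so the answer is grounded in source material rather than the model's parametric memory. This is a minimal illustrative sketch: the toy corpus, the keyword-overlap retriever, and the prompt template are assumptions for demonstration (production systems use dense-embedding retrieval and a real LLM call).

```python
def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query (stand-in
    for an embedding-based retriever)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Build an augmented prompt that grounds the model's answer in
    retrieved text, which is how RAG mitigates hallucination."""
    context = "\n".join(retrieve(query, corpus))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

# Toy two-document corpus (illustrative content only).
corpus = [
    "Metformin is a first-line treatment for type 2 diabetes.",
    "Knowledge graphs encode entities and relations as triples.",
]
print(build_prompt("What is a first-line treatment for type 2 diabetes?", corpus))
```

The grounding step is the design point: because the prompt instructs the model to answer only from retrieved text, unsupported claims become easier to detect and audit, which is why RAG appears alongside knowledge graphs in the claims above.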