concept

AI/LLM tools

Also known as: AI/LLM

Facts (13)

Sources
Medical Hallucination in Foundation Models and Their ... (medRxiv, medrxiv.org, Mar 3, 2025) · 12 facts
procedure: The survey conducted by the authors of 'Medical Hallucination in Foundation Models and Their ...' asks respondents to rate, on a scale of 1 to 5, their trust in AI/LLM answers, how often those answers are correct, and how often they encounter AI hallucinations.
claim: Survey respondents identified limitations in training data and model architectures as key factors contributing to medical hallucinations in AI/LLM tools.
procedure: The survey conducted by the authors of 'Medical Hallucination in Foundation Models and Their ...' includes questions regarding the respondent's field of work, years of experience, area of practice, region of practice, highest degree obtained, and frequency of AI/LLM tool usage.
measurement: Of 61 survey respondents, 21 believed AI/LLM outputs were often correct, 18 stated they were sometimes correct, and 6 felt they were rarely correct.
claim: Identified limitations of current AI/LLM tools in the medical field include accuracy issues, lack of domain-specific knowledge, difficulty in explaining AI decisions, privacy and data security concerns, difficulty integrating with existing workflows, lack of standardization or validation of AI tools, and ethical concerns such as bias or job displacement.
perspective: Survey respondents identified ethical considerations, privacy, and user education as essential components for the responsible implementation of AI/LLM tools.
claim: Medical hallucinations are defined as factually incorrect yet plausible outputs with medical relevance generated by AI/LLM tools.
perspective: Survey respondents emphasized that improved accuracy, explainability, and workflow integration are essential for future AI/LLM tools.
procedure: When they encounter hallucinations, medical professionals respond by cross-referencing the AI/LLM output with other sources, consulting colleagues or experts, ignoring the output, or refraining from using the AI/LLM for similar tasks.
account: The authors of 'Medical Hallucination in Foundation Models and Their ...' conducted a clinician survey to understand healthcare professionals' perceptions and experiences regarding AI/LLM adoption and the challenges of medical hallucinations in practice.
measurement: Among survey respondents, 40 used AI/LLM tools daily, 9 used them several times per week, 13 used them a few times a month, and 13 reported rare or no usage.
measurement: Regarding trust in AI/LLM outputs, 30 survey respondents expressed high trust, 25 reported moderate trust, and 12 indicated low trust.
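The three measurement facts above report raw response counts. As a quick illustrative sketch (not part of the survey analysis), the snippet below tabulates those counts into per-question shares; because the listed categories do not sum to the same total for every question, each share uses only the categories reported above as its denominator.

```python
# Illustrative tabulation of the response counts reported in the
# measurement facts above. Each question's shares are computed over
# the listed categories only; respondents who chose other options
# are not covered by these figures.

survey_counts = {
    "correctness of AI/LLM outputs": {"often": 21, "sometimes": 18, "rarely": 6},
    "frequency of AI/LLM tool usage": {
        "daily": 40,
        "several times per week": 9,
        "a few times a month": 13,
        "rare or no usage": 13,
    },
    "trust in AI/LLM outputs": {"high": 30, "moderate": 25, "low": 12},
}

for question, counts in survey_counts.items():
    listed_total = sum(counts.values())
    print(f"{question} (n={listed_total} across listed categories)")
    for category, count in counts.items():
        print(f"  {category}: {count} ({count / listed_total:.0%})")
```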
bureado/awesome-software-supply-chain-security (GitHub, github.com) · 1 fact
reference: The 'cyfinoid/aibommaker' project is a client-side web tool that analyzes GitHub repositories for AI/LLM usage and generates AI Bills of Materials (AIBOMs) in CycloneDX 1.7 and SPDX 3.0.1 formats, including detection of hardware, infrastructure, and governance components.
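For orientation only, here is a minimal sketch of what a CycloneDX-style AIBOM document could look like. The component names and versions are hypothetical, the specVersion simply follows the description above, and this is not the aibommaker project's actual output, which also detects hardware, infrastructure, and governance components.

```python
# Hedged sketch, not aibommaker's implementation: assemble a minimal
# CycloneDX-style AIBOM as a Python dict and serialize it to JSON.
# Names and versions below are hypothetical placeholders.
import json

aibom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.7",  # spec version cited in the fact above
    "version": 1,
    "components": [
        {
            # CycloneDX defines a dedicated component type for ML models
            "type": "machine-learning-model",
            "name": "example-llm",        # hypothetical model
            "version": "2024-06",
        },
        {
            "type": "library",
            "name": "example-inference-sdk",  # hypothetical AI/LLM client library
            "version": "0.1.0",
        },
    ],
}

print(json.dumps(aibom, indent=2))
```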