concept

clinical notes

Also known as: clinical note

Facts (16)

Sources
A framework to assess clinical safety and hallucination rates of LLMs ... · nature.com · Nature · May 13, 2025 · 13 facts
measurement: Studies estimate that human-generated clinical notes contain, on average, at least one error and four omissions.
measurement: Out of 12,999 sentences in 450 clinical notes, 191 sentences (1.47%) contained hallucinations.
procedure: The evaluation framework for LLM-generated clinical notes recruits two medical doctors to review each sentence of a note against the source transcript; sentences not evidenced in the transcript are labeled as hallucinations, while clinically relevant transcript sentences absent from the note are labeled as omissions (a minimal data-model sketch appears after this fact list).
claim: Negation hallucinations, which contradict information from the consultation, accounted for 30% of total hallucinations and appeared primarily in the planning section of clinical notes.
claim: Abacha et al. propose evaluating the quality of LLM-generated clinical notes using automated metrics.
claim: Experiment 7 introduced a first-person perspective in generated clinical notes.
measurement: Major omissions in clinical notes were most common in the 'current issues' section (55%), followed by the 'PMFS' section (35%) and the 'Info and Plan' sections (10%).
procedure: Hallucinations and omissions in clinical notes are classified as 'major' if, left uncorrected, they could change patient diagnosis or management, and 'minor' otherwise.
measurement: Hallucinations in clinical notes occurred most frequently in the 'Plan' section, accounting for 20% of all hallucinations.
procedure: Experiment 18 assessed clinical-note performance on the publicly available ACI-Bench dataset.
claim: In Experiment 5, adding a chain-of-thought prompt that extracts facts from the transcript (atomisation) before generating the clinical note increased major hallucinations and omissions (see the first prompt sketch after this list).
measurement: Major hallucinations occurred most commonly in the 'Plan' (21%), 'Assessment' (10.5%), and 'Symptoms' (5.2%) sections of clinical notes.
claim: Performance in generating clinical notes improved significantly with structured prompting, including a style update and an instruction to output the status 'unknown' when information was missing from the transcript (see the second prompt sketch after this list).
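
To make the review procedure concrete, here is a minimal Python sketch of the sentence-level labeling scheme the framework describes: each note sentence is checked against the transcript, flagged sentences carry a major/minor severity, and the hallucination rate is the flagged fraction. The class and function names (SentenceReview, hallucination_rate) are illustrative assumptions, not part of the published framework.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    MAJOR = "major"   # could change diagnosis or management if uncorrected
    MINOR = "minor"

class Label(Enum):
    FAITHFUL = "faithful"            # evidenced in the source transcript
    HALLUCINATION = "hallucination"  # note sentence with no transcript evidence
    OMISSION = "omission"            # relevant transcript content missing from note

@dataclass
class SentenceReview:
    sentence: str
    section: str                       # e.g. 'Plan', 'Assessment', 'Symptoms'
    label: Label
    severity: Severity | None = None   # set only for hallucinations/omissions

def hallucination_rate(reviews: list[SentenceReview]) -> float:
    """Fraction of reviewed note sentences labeled as hallucinations."""
    flagged = sum(r.label is Label.HALLUCINATION for r in reviews)
    return flagged / len(reviews)

# The reported figure: 191 hallucinated sentences out of 12,999 reviewed.
print(f"{191 / 12_999:.2%}")  # -> 1.47%
```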
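
Experiment 5's atomisation step can be read as a two-stage prompt: first extract facts from the transcript, then compose the note from those facts. The wording below is a hypothetical reconstruction, not the paper's actual prompt, and `call_llm` is a stand-in for whatever client the evaluation harness uses; note that, per the fact above, this strategy increased major hallucinations and omissions rather than reducing them.

```python
# Hypothetical two-stage 'atomisation' prompting, per Experiment 5.
ATOMISE_PROMPT = (
    "Read the consultation transcript below and list every clinically "
    "relevant fact as a separate bullet point. Think step by step.\n\n"
    "Transcript:\n{transcript}"
)

COMPOSE_PROMPT = (
    "Using only the extracted facts below, write a structured clinical "
    "note with Symptoms, Assessment, and Plan sections.\n\n"
    "Facts:\n{facts}"
)

def generate_note_with_atomisation(transcript: str, call_llm) -> str:
    facts = call_llm(ATOMISE_PROMPT.format(transcript=transcript))
    return call_llm(COMPOSE_PROMPT.format(facts=facts))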
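
The structured-prompting result lends itself to a similar sketch. Again, the template text is an assumption; the ingredients named in the fact are a structured output format, a style update, and an explicit instruction to emit 'unknown' for missing information rather than guessing.

```python
# Hypothetical structured prompt with an explicit 'unknown' escape hatch.
STRUCTURED_PROMPT = (
    "Write a clinical note from the transcript below using exactly these "
    "headed sections: Current Issues, PMFS, Symptoms, Assessment, Plan.\n"
    "If a section's information is not present in the transcript, write "
    "the single word 'unknown' for that section. Do not infer or invent "
    "details that are not stated.\n\n"
    "Transcript:\n{transcript}"
)

def generate_structured_note(transcript: str, call_llm) -> str:
    return call_llm(STRUCTURED_PROMPT.format(transcript=transcript))
```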
Enterprise AI Requires the Fusion of LLM and Knowledge Graph · linkedin.com · Jacob Seric, LinkedIn · Jan 2, 2025 · 1 fact
claim: Clinical decision-making in healthcare faces three primary challenges: high data volume (including evidence and patient data), the prevalence of unstructured data (such as clinical notes, imaging reports, and discharge summaries), and non-deterministic, judgment-driven decision-making.
Medical Hallucination in Foundation Models and Their ... · medrxiv.org · medRxiv · Mar 3, 2025 · 1 fact
claim: Large Language Models can hallucinate patient information, history, and symptoms in clinical notes, producing content that does not align with the original notes.
Medical Hallucination in Foundation Models and Their Impact on ... · medrxiv.org · medRxiv · Nov 2, 2025 · 1 fact
claim: LLMs can hallucinate patient information, history, or symptoms when generating or summarizing clinical notes, resulting in content that diverges from the source record.