Relations (1)

related 0.10 — supporting 1 fact

Large Language Models are related to emissions because [1] indicates that hallucinations and omissions are intrinsic properties of LLMs in clinical note generation, linking the text they emit to these theoretical characteristics.

Facts (1)

Sources
A framework to assess clinical safety and hallucination rates of LLMs ... — Nature (nature.com), 1 fact
Claim: The study on LLM clinical note generation supports the theory that hallucinations and omissions may be intrinsic theoretical properties of current Large Language Models.