Sources
Survey and analysis of hallucinations in large language models (frontiersin.org)
Claim: Weidinger et al. (2022) assert that the stakes of hallucination in high-risk domains such as medicine, law, and education are far higher than in open-domain tasks.
Medical Hallucination in Foundation Models and Their Impact on ... (medrxiv.org)
Claim: Hallucinations in Large Language Models (LLMs) are documented across multiple domains, including finance, legal, code generation, and education.
The Role of Hallucinations in Large Language Models (cloudthat.com)
Claim: Hallucinations in large language models pose risks in high-stakes domains, such as misdiagnosing conditions in healthcare, fabricating legal precedents, generating fake market data in finance, and providing incorrect facts in education.