Relations (1)

Related (score 2.32) — strongly supporting, 4 facts

Large Language Models are integrated into education to provide personalized learning guidance [1], though their tendency to hallucinate poses significant risks, as they may supply incorrect information in educational settings [2], [3], [4].

Facts (4)

Sources
Practices, opportunities and challenges in the fusion of knowledge ... — frontiersin.org (Frontiers), 1 fact
Claim: In the field of education, knowledge graphs help organize and visualize complex learning content, while integration with large language models enables intelligent systems to provide precise learning guidance and personalized recommendations.
Medical Hallucination in Foundation Models and Their ... — medrxiv.org (medRxiv), 1 fact
Claim: Hallucination or confabulation in Large Language Models is a concern across various domains, including finance, legal, code generation, and education.
Medical Hallucination in Foundation Models and Their Impact on ... — medrxiv.org (medRxiv), 1 fact
Claim: Hallucinations in Large Language Models (LLMs) are documented across multiple domains, including finance, legal, code generation, and education.
The Role of Hallucinations in Large Language Models — cloudthat.com (CloudThat), 1 fact
Claim: Hallucinations in large language models pose risks in high-stakes domains, such as misdiagnosing conditions in healthcare, fabricating legal precedents, generating fake market data in finance, and providing incorrect facts in education.