claim
Large language models may hallucinate when they assume a level of domain-specific knowledge or cultural context that is not universally shared.
Authors
Sources
- LLM Hallucinations: Causes, Consequences, Prevention (llmmodels.org, via serper)
Referenced by nodes (3)
- Large Language Models concept
- Domain-Specific Knowledge concept
- Cultural Context concept