claim
Large Language Models (LLMs) are prone to generating factually incorrect information ('hallucinations'), struggle to process extended contexts, and suffer from catastrophic forgetting, in which previously learned knowledge is overwritten during training on new data.
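As a toy illustration of the last failure mode, the sketch below (not from the cited source; all data, model, and hyperparameters are illustrative) trains a tiny logistic-regression classifier with plain SGD on one task, then sequentially on a second task whose labels contradict the first. With no replay or regularization, accuracy on the first task collapses.

```python
# Minimal sketch of catastrophic forgetting (illustrative, not from KG-RAG):
# a linear classifier trained sequentially on two conflicting tasks loses
# nearly all of its accuracy on the first task.
import numpy as np

rng = np.random.default_rng(0)

def make_task(flip):
    # 2-D points; label = sign of x0, flipped for the second task so the
    # two tasks directly conflict (an extreme case of distribution shift).
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] > 0).astype(float)
    return X, (1 - y) if flip else y

def train(w, X, y, lr=0.5, epochs=200):
    # Plain full-batch SGD on the logistic loss, no replay buffer and no
    # regularization toward previous weights.
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)    # log-loss gradient step
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0) == (y > 0.5)).mean())

Xa, ya = make_task(flip=False)  # task A
Xb, yb = make_task(flip=True)   # task B contradicts task A

w = np.zeros(2)
w = train(w, Xa, ya)
print("task A accuracy after training on A:", accuracy(w, Xa, ya))  # ~1.0

w = train(w, Xb, yb)            # continue training on B only
print("task A accuracy after training on B:", accuracy(w, Xa, ya))  # ~0.0
```

Continual-learning methods (replay, regularization such as EWC, or retrieval-based grounding as in KG-RAG) aim to avoid exactly this kind of overwrite.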
Authors
Sources
- KG-RAG: Bridging the Gap Between Knowledge and Creativity (arxiv.org)
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept