Claim
Large Language Models are prone to generating factually incorrect information ('hallucinations'), struggle to process long input contexts, and suffer from catastrophic forgetting, in which previously learned knowledge is lost during subsequent training.
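Of the three failure modes named in this claim, catastrophic forgetting is the most direct to reproduce. Below is a minimal sketch, assuming PyTorch is available: a small MLP is first fit to a hypothetical "task A" (y = 2x), then fine-tuned on a hypothetical "task B" (y = -3x); re-measuring the task A error afterwards shows the earlier mapping has been overwritten. The tasks, model size, and hyperparameters here are illustrative assumptions, not details from the source.

```python
# Minimal sketch of catastrophic forgetting (illustrative, not from the source).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.linspace(-1, 1, 64).unsqueeze(1)
task_a = 2 * x   # hypothetical task A: y = 2x
task_b = -3 * x  # hypothetical task B: y = -3x

def train(target, steps=500):
    # Plain sequential training: no replay buffer or regularization,
    # so earlier tasks are free to be overwritten.
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), target)
        loss.backward()
        opt.step()

train(task_a)
err_a_before = loss_fn(model(x), task_a).item()

train(task_b)  # fine-tuning on task B repurposes the same weights
err_a_after = loss_fn(model(x), task_a).item()

print(f"task A error before fine-tuning on B: {err_a_before:.4f}")
print(f"task A error after  fine-tuning on B: {err_a_after:.4f}")
```

Running this typically shows the task A error rising by orders of magnitude after the second training phase, which is the forgetting effect the claim describes; mitigations such as replay or elastic weight consolidation are deliberately omitted to keep the sketch minimal.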
