Relations (1)

related (strength 1.00) — strongly supporting; 1 fact

Both hallucination and in-context learning (ICL) are identified in [1] as emergent phenomena that manifest in Large Language Models as they scale.

Facts (1)

Sources
A Survey on the Theory and Mechanism of Large Language Models (arXiv) — 1 fact
Claim: Large Language Models exhibit emergent phenomena not found in smaller models, including hallucination, in-context learning (ICL), scaling laws, and sudden 'aha moments' during training.