claim
Foundation models hallucinate because their autoregressive training objective maximizes the likelihood of the next token under the training corpus rather than factual (epistemic) accuracy, producing models that are overconfident and whose uncertainty is poorly calibrated.
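
The mechanism in the claim can be sketched in a few lines. This is a toy illustration, not any model's actual code: the vocabulary, the probabilities, and the prompt are hypothetical, and real training averages this loss over billions of tokens. The point it shows is that the cross-entropy objective depends only on the probability assigned to the reference token, so if the corpus contains a common factual error, the objective rewards reproducing it.

```python
import math

def cross_entropy(probs: dict, target: str) -> float:
    # Negative log-likelihood of the reference token. Note the loss
    # depends only on probs[target], never on whether the
    # continuation is factually true.
    return -math.log(probs[target])

# Hypothetical next-token distribution after a prompt like
# "The capital of Australia is":
probs = {"Sydney": 0.6, "Canberra": 0.3, "Melbourne": 0.1}

# If the corpus happens to contain the common error "Sydney",
# the training objective penalizes the factually correct token more:
loss_error_token = cross_entropy(probs, "Sydney")      # ~0.51
loss_correct_token = cross_entropy(probs, "Canberra")  # ~1.20
assert loss_error_token < loss_correct_token
```

Nothing in the objective distinguishes the two continuations except corpus frequency, which is one way to make the claim's "token-likelihood over epistemic accuracy" concrete.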

Authors

Sources

Referenced by nodes (2)