Claim
Foundation models hallucinate because their autoregressive training objective optimizes next-token likelihood rather than epistemic accuracy, yielding overconfident outputs and poorly calibrated uncertainty.
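For reference, a minimal statement of the objective the claim points at (notation assumed here, not taken from the cited source): standard autoregressive training minimizes the negative log-likelihood of each observed token given its prefix,

$$
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})
$$

Nothing in this loss distinguishes a confidently wrong continuation from a well-calibrated one; it only rewards probability mass placed on the observed token.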
Authors
Sources
- Medical Hallucination in Foundation Models and Their Impact on ... (www.medrxiv.org, via Serper)
Referenced by nodes (2)
- hallucination concept
- foundation models concept
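As a minimal sketch of what "poorly calibrated uncertainty" means in practice, the snippet below computes expected calibration error (ECE), a common gap measure between a model's stated confidence and its empirical accuracy. The function name and the synthetic data are illustrative assumptions, not drawn from the cited source.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: the bin-weighted gap between mean stated confidence
    and empirical accuracy, over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            avg_conf = confidences[mask].mean()  # mean stated confidence in bin
            accuracy = correct[mask].mean()      # fraction actually correct in bin
            ece += mask.mean() * abs(avg_conf - accuracy)
    return ece

# Illustrative (synthetic) data: a model that states ~95% confidence
# but is right only ~70% of the time, i.e. overconfident.
rng = np.random.default_rng(0)
conf = rng.uniform(0.9, 1.0, size=1000)
hits = rng.random(1000) < 0.7
print(f"ECE = {expected_calibration_error(conf, hits):.3f}")  # roughly 0.25
```

An ECE near zero would indicate well-calibrated confidence; the large gap here is the kind of miscalibration the claim attributes to likelihood-only training.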