Claim
Svenstrup et al. (2015) observe that large language models often lack exposure to rare-disease data during training, which leads them to hallucinate when generating diagnostic insights.
