claim
Large Language Models often see little or no data on rare diseases during training, so they are prone to hallucinate when generating diagnostic insights for those conditions.
