claim
Large Language Models can hallucinate patient information, history, and symptoms when summarizing or generating clinical notes, producing content that does not align with the original notes.