Claim
Hallucinations in Large Language Models create risks of misinformation, reduced user trust, and gaps in accountability (Bommasani et al., 2021; Weidinger et al., 2022).
