claim
Large language models are vulnerable to adversarial attacks and input manipulation, which can cause them to generate hallucinated text.
