claim
Large language models (LLMs) present distinctive risks, including hallucination, prompt sensitivity, and limited explainability, that require dedicated governance and oversight.

Authors

Sources

Referenced by nodes (2)