claim
Using Large Language Models (LLMs) in OSS security poses several challenges: LLMs may misinterpret security reports; they can generate incomplete or inaccurate patches that introduce regressions; and their decision-making is opaque, which makes maintainers hesitant to trust AI-generated suggestions.
Authors
Sources
- A Mixed-Methods Study of Open-Source Software Maintainers On ... (arxiv.org)
Referenced by nodes (1)
- Large Language Models concept