procedure
An 'AI Package Hallucination attack' is an attack vector that exploits Large Language Models (LLMs) hallucinating plausible but non-existent package names in generated code. Attackers identify these recurring hallucinated names, register them on OSS registries such as npm or PyPI, and seed them with malicious code, so developers who install an LLM-suggested dependency pull in the attacker's package instead.
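The exposure comes from installing an LLM-suggested dependency without checking that the name is actually registered. Below is a minimal defensive sketch in Python that verifies a name against the public PyPI JSON API (which returns HTTP 404 for unregistered projects); the package names in the demo loop are hypothetical examples, not from the source.

```python
import urllib.request
import urllib.error

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered PyPI project.

    Uses the public PyPI JSON API (https://pypi.org/pypi/<name>/json),
    which responds with HTTP 404 for unregistered names.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # name is unclaimed: likely a hallucinated suggestion
        raise  # other HTTP errors should not be treated as "missing"

# Hypothetical LLM-suggested dependencies; "fastjson-utils" stands in for
# the kind of plausible-sounding name an LLM might hallucinate.
for pkg in ["requests", "fastjson-utils"]:
    status = "registered" if package_exists_on_pypi(pkg) else "NOT registered"
    print(f"{pkg}: {status}")
```

Note that existence alone is not a safety signal: once an attacker has registered a hallucinated name, this check passes. It only flags names that are still unclaimed, i.e., before an attacker squats on them.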
Sources
- Cyber Insights 2025: Open Source and Software Supply Chain ... (www.securityweek.com)
Referenced by nodes (2)
- Large Language Models (concept)
- npm (concept)