An 'AI Package Hallucination attack' is an attack vector that exploits Large Language Models (LLMs) hallucinating plausible but non-existent package names in the code they generate. Malicious actors identify these hallucinated names, register them in OSS registries such as npm or PyPI, and publish malicious code under them, so that developers who install the LLM-suggested dependencies unknowingly pull in the attacker's package.
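The reconnaissance step of this attack can be sketched as follows: given a list of package names an LLM suggests and an index of names already registered, the attacker finds the unregistered (hallucinated) names that are free to squat. This is a minimal, self-contained illustration; all package names and the `find_squattable_names` helper are hypothetical, not real registry data.

```python
def find_squattable_names(llm_suggestions, registry_index):
    """Return LLM-suggested package names absent from the registry.

    These are the hallucinated names an attacker could register
    and backdoor. All inputs here are illustrative examples.
    """
    return sorted(set(llm_suggestions) - set(registry_index))


# Hypothetical names an LLM might emit in generated code:
llm_suggestions = ["requests", "fast-json-parserx", "numpy", "auth-token-helperx"]

# Hypothetical snapshot of names already present in the registry:
registry_index = {"requests", "numpy", "flask"}

print(find_squattable_names(llm_suggestions, registry_index))
# ['auth-token-helperx', 'fast-json-parserx']
```

In practice an attacker would query the real registry (for example, the npm or PyPI public APIs) instead of a local set, and defenders can run the same check in reverse to flag dependencies in generated code that do not exist upstream.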
