claim
Adversaries can poison AI training datasets through cyber operations that mislabel training examples or embed hidden triggers (backdoors), causing models to misclassify inputs or behave in attacker-chosen ways.
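A minimal sketch of the trigger-based poisoning the claim describes, on a toy dataset. All names here (`poison`, `TRIGGER_VALUE`, the 2x2 corner patch) are illustrative assumptions, not a real attack toolkit or any specific incident: a small fraction of samples get a fixed "trigger" pattern stamped in and their labels flipped to the attacker's target class, so a model trained on the result learns to associate the trigger with that class.

```python
import numpy as np

# Illustrative sketch only: backdoor ("hidden trigger") poisoning of a toy
# image dataset. Names and values are assumptions for demonstration.

rng = np.random.default_rng(0)

# Toy dataset: 100 grayscale 8x8 "images" with binary labels.
images = rng.random((100, 8, 8))
labels = rng.integers(0, 2, size=100)

TRIGGER_VALUE = 1.0    # a bright 2x2 corner patch serves as the hidden trigger
TARGET_LABEL = 1       # attacker-chosen output whenever the trigger appears
POISON_FRACTION = 0.1  # poison 10% of the training set

def poison(images, labels, fraction, rng):
    """Return poisoned copies: stamp the trigger patch and flip the label."""
    imgs, labs = images.copy(), labels.copy()
    n_poison = int(len(imgs) * fraction)
    idx = rng.choice(len(imgs), size=n_poison, replace=False)
    imgs[idx, :2, :2] = TRIGGER_VALUE  # inject the hidden trigger
    labs[idx] = TARGET_LABEL           # mislabel toward the target class
    return imgs, labs, idx

p_images, p_labels, poisoned_idx = poison(images, labels, POISON_FRACTION, rng)

print(len(poisoned_idx))  # number of poisoned samples
```

A model trained on `(p_images, p_labels)` would tend to predict `TARGET_LABEL` for any input carrying the corner patch while behaving normally on clean inputs, which is what makes such triggers hard to detect with ordinary accuracy tests.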
