claim
Adversaries can attack AI systems by targeting model weights through espionage and hacking, poisoning training datasets, blinding or spoofing sensors on intelligence, surveillance, and reconnaissance (ISR) platforms, disabling data relays, or physically damaging infrastructure such as data centers, cables, satellites, and uncrewed systems.
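The data-poisoning vector named in the claim can be made concrete with a toy sketch. This is illustrative only, not from the source: the nearest-centroid classifier, the class distributions, and the number of injected points are all assumptions chosen for clarity. An attacker who can write to the training set injects mislabeled outliers, and the model trained on the corrupted data then misclassifies clean test inputs.

```python
import random

def centroids(data):
    """Per-class mean of the (1-D) feature."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def classify(x, cents):
    """Assign x to the class whose centroid is nearest."""
    return min(cents, key=lambda y: abs(x - cents[y]))

def accuracy(train, test):
    cents = centroids(train)
    return sum(classify(x, cents) == y for x, y in test) / len(test)

rng = random.Random(0)
# Two well-separated classes: feature ~ N(0, 1) for class 0, N(4, 1) for class 1.
train = [(rng.gauss(0.0, 1.0), 0) for _ in range(200)] + \
        [(rng.gauss(4.0, 1.0), 1) for _ in range(200)]
test = [(rng.gauss(0.0, 1.0), 0) for _ in range(100)] + \
       [(rng.gauss(4.0, 1.0), 1) for _ in range(100)]

# Poisoning: the attacker injects outliers mislabeled as class 1,
# dragging the class-1 centroid far from its true cluster.
poison = [(-10.0, 1) for _ in range(100)]

acc_clean = accuracy(train, test)
acc_poisoned = accuracy(train + poison, test)
print(f"clean accuracy:    {acc_clean:.2f}")
print(f"poisoned accuracy: {acc_poisoned:.2f}")
```

A robust learner would filter such outliers; the point of the sketch is only that control over training data translates directly into control over model behavior, which is why dataset provenance is treated as an attack surface.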
