claim
Targeted robustness measures a machine learning model's resistance to adversarial inputs crafted to force a specific, attacker-chosen target label. It complements untargeted robustness, which counts any misclassification regardless of which wrong label results.
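To make the distinction concrete, here is a minimal sketch of a targeted attack on a toy linear softmax classifier. The model, weights, step size, and helper names are all illustrative assumptions, not from the source: the attack descends the cross-entropy loss toward an attacker-chosen label until the model's prediction flips to that label. Targeted robustness is resistance to exactly this kind of steering; an untargeted metric would count success as soon as the prediction changed at all.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # toy linear model: 3 classes, 4 features
x = rng.normal(size=4)        # a clean input

def predict(v):
    return int(np.argmax(W @ v))

# Attacker-chosen target label, distinct from the current prediction.
target = (predict(x) + 1) % 3
onehot = np.eye(3)[target]

x_adv = x.copy()
for _ in range(200):
    p = softmax(W @ x_adv)
    # Gradient of -log p_target w.r.t. the input is W^T (p - onehot).
    grad = W.T @ (p - onehot)
    # Descend: push the input toward the target class.
    x_adv -= 0.05 * grad

success = predict(x_adv) == target
```

The targeted loss here is convex in the input (logits are linear in `x`), so the attack reliably drives the prediction to the chosen label; in the untargeted setting one would instead ascend the loss of the true label and accept any resulting misclassification.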

Referenced by nodes (1)