Adversarial Machine Learning: A Systematic Survey from the Lifecycle Perspective

Two comprehensive surveys unifying adversarial machine learning (AML) attacks and defenses across the ML lifecycle. They provide a hierarchical taxonomy and mathematical framework for understanding AML attacks (backdoor attacks, weight attacks, and adversarial example attacks) and defenses at each lifecycle stage: pre-training, training, post-training, deployment, and inference.

Abstract

Adversarial machine learning (AML) investigates vulnerabilities that cause machine learning systems to produce predictions deviating from human expectations. We unify the landscape of attacks and defenses through a lifecycle-aware perspective spanning pre-training, training, post-training, deployment, and inference. On the attack side, we synthesize three primary paradigms: backdoor attacks (arising through data poisoning or training control, activated at inference), weight attacks (injected post-training or at deployment via bit-flips), and adversarial examples (crafted at inference). We clarify their connections and differences across stages. On the defense side, we present a coherent taxonomy aligned to the same lifecycle: pre-training (robust model architectures; poisoned-data detection), training (adversarial training; secure centralized and decentralized training), post-training (backdoor detection, target-class detection, and mitigation/removal), deployment (model enhancement and fingerprint verification against weight attacks), and inference (poisoned-query detection, robust prediction, adversarial-query/sequence detection, and randomized/dynamic inference). To facilitate reproducible evaluation under this unified perspective, we further release two open-source benchmarks, BackdoorBench and BlackboxBench, which provide standardized datasets, protocols, and metrics mapped to the lifecycle stages, enabling apples-to-apples comparisons and stress-testing of cross-stage strategies.
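As a concrete illustration of the inference-stage paradigm, the sketch below crafts an adversarial example with one-step FGSM (fast gradient sign method). The names `model`, `x`, and `y` are hypothetical stand-ins for a PyTorch classifier and a labeled input batch; this is an illustrative sketch, not code from the surveys or benchmarks.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8/255):
    """One-step FGSM: perturb x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # classification loss on the clean input
    loss.backward()
    x_adv = x + eps * x.grad.sign()       # ascend the loss within an L-inf ball
    return x_adv.clamp(0, 1).detach()     # keep pixels in the valid range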

These works make three key contributions: (1) they offer formal, lifecycle-aware definitions of AML threats; (2) they organize both attack methodologies and stage-specific defenses into hierarchical taxonomies while elucidating inter-paradigm relationships; and (3) they extend the analysis to generative and multimodal models as well as to beneficial applications.

Our findings emphasize that piecemeal, single-stage defenses are insufficient against coordinated, multi-stage attacks; instead, holistic, cross-stage strategies and lifecycle-aware monitoring are essential. We distill comparative insights and trade-offs (effectiveness, cost, clean-accuracy impact, scalability, generality, evasion difficulty), highlight practical considerations in safety-critical domains, and chart concrete research directions toward adaptive, automated, and comprehensive AML defenses.

Taxonomy of Adversarial Machine Learning

Collection of Papers

BackdoorBench Framework
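BackdoorBench evaluates poisoning-based backdoor attacks and defenses with two standard metrics: clean accuracy and attack success rate (ASR). The minimal sketch below, assuming a PyTorch `model` and a `loader` of images normalized to [0, 1] (both hypothetical names, not the benchmark's API), stamps a BadNets-style patch trigger and computes both metrics.

import torch

def add_trigger(x, target, patch_size=3):
    """Stamp a white square trigger in the bottom-right corner and
    relabel every example to the attacker's target class."""
    x = x.clone()
    x[..., -patch_size:, -patch_size:] = 1.0
    y = torch.full((x.size(0),), target, dtype=torch.long)
    return x, y

@torch.no_grad()
def evaluate(model, loader, target=0):
    model.eval()
    clean_correct = asr_hits = total = 0
    for x, y in loader:
        clean_correct += (model(x).argmax(1) == y).sum().item()
        x_bd, y_bd = add_trigger(x, target)
        asr_hits += (model(x_bd).argmax(1) == y_bd).sum().item()
        total += y.size(0)
    return clean_correct / total, asr_hits / total  # clean accuracy, ASR

A stricter ASR would exclude samples whose true label already equals the target class; the sketch omits that filter for brevity.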

BlackboxBench Framework
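BlackboxBench covers query-based black-box attacks that observe only model outputs. The sketch below, in the spirit of SimBA-style random-coordinate search (again with hypothetical `model`, `x`, `y` names, not the benchmark's own code), keeps a candidate pixel perturbation whenever a query shows it lowers the true-class probability.

import torch

@torch.no_grad()
def simba(model, x, y, eps=0.05, max_iters=1000):
    """Score-based black-box attack: no gradients, only output probabilities.
    Each iteration issues at most two queries (one per sign)."""
    x_adv = x.clone()
    prob = model(x_adv.unsqueeze(0)).softmax(1)[0, y]
    dims = x_adv.numel()
    for _ in range(max_iters):
        i = torch.randint(dims, (1,)).item()        # pick a random coordinate
        for sign in (eps, -eps):
            cand = x_adv.flatten().clone()
            cand[i] = (cand[i] + sign).clamp(0, 1)
            cand = cand.view_as(x_adv)
            p = model(cand.unsqueeze(0)).softmax(1)[0, y]
            if p < prob:                            # query improved: keep it
                x_adv, prob = cand, p
                break
    return x_adv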

BibTeX

@article{aml-attack-ijcv-2025,
  title={Attacks in Adversarial Machine Learning: A Systematic Survey from the Lifecycle Perspective},
  author={Wu, Baoyuan and Zhu, Zihao and Liu, Li and Liu, Qingshan and He, Zhaofeng and Lyu, Siwei},
  journal={IJCV},
  year={2025}
}
        
@article{aml-defense-tpami-2025,
  title={Defenses in Adversarial Machine Learning: A Systematic Survey from the Lifecycle Perspective},
  author={Wu, Baoyuan and Zhu, Mingli and Zheng, Meixi and Zhu, Zihao and Wei, Shaokui and Zhang, Mingda and Chen, Hongrui and Yuan, Danni and Liu, Li and Liu, Qingshan},
  journal={TPAMI},
  year={2025}
}
        
@article{zheng2025blackboxbench,
  title={{BlackboxBench}: A comprehensive benchmark of black-box adversarial attacks},
  author={Zheng, Meixi and Yan, Xuanchen and Zhu, Zihao and Chen, Hongrui and Wu, Baoyuan},
  journal={TPAMI},
  year={2025}
}
        
@article{wu2025backdoorbench,
  title={{BackdoorBench}: A comprehensive benchmark and analysis of backdoor learning},
  author={Wu, Baoyuan and Chen, Hongrui and Zhang, Mingda and Zhu, Zihao and Wei, Shaokui and Yuan, Danni and Zhu, Mingli and Wang, Ruotong and Liu, Li and Shen, Chao},
  journal={IJCV},
  year={2025}
}