DSAdv: Dual-Stream diffusion adversarial attacks for autonomous driving perception algorithms
1 School of Information and Artificial Intelligence, Anhui Agricultural University, Hefei 230036, China
2 Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
3 Science Island Branch, University of Science and Technology of China, Hefei 230026, China
4 Southwest Institute of Technical Physics, Chengdu 610225, China
5 School of Computer Science and Artificial Intelligence, ChaoHu University, Hefei 238000, China
6 School of Electrical and Information Engineering, Changzhou Institute of Technology, Changzhou 213002, China
  • DOI: 10.55092/rl20260007
  • Copyright: © 2026 by the authors. Published by ELSP.
Abstract

Deep learning-based perception algorithms in autonomous driving are vulnerable to adversarial examples. Existing adversarial attack methods often struggle to balance visual fidelity against attack strength, either generating conspicuous noise patterns that lack visual naturalness or requiring computationally expensive iterative optimization at inference time. To address these limitations, we propose DSAdv, a novel framework for high-fidelity adversarial example generation via improved LoRA fine-tuning of Latent Diffusion Models (LDMs). We introduce a decoupled Dual-Stream LoRA architecture consisting of an Adversarial Stream, which injects detector-blinding perturbations, and an Integrity Keeper Stream, which maintains semantic realism. The two streams are fused dynamically via an Adaptive Input Gating mechanism, resolving the coordination problem between visual fidelity and adversarial strength. Furthermore, our method guides the LoRA fine-tuning of the pre-trained LDM by backpropagating gradients from a target detector, which significantly shortens inference. Extensive experiments on the NuScenes, KITTI, and Science Island datasets show that DSAdv generates naturalistic adversarial examples with higher transferability and visual quality than state-of-the-art methods such as AdvDiff and PGD. The proposed framework enables the efficient generation of naturalistic, stealthy adversarial scenarios, providing a rigorous tool for evaluating and enhancing the robustness of autonomous driving systems.
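To make the dual-stream idea concrete, the following is a minimal PyTorch sketch of a linear layer augmented with two decoupled low-rank (LoRA) streams fused by an input-conditioned gate, as described above. All names and design details here (DualStreamLoRALinear, the rank and alpha values, the single-logit sigmoid gate) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class DualStreamLoRALinear(nn.Module):
    """Frozen base linear layer plus two LoRA streams: an adversarial
    stream and an integrity-keeper stream, fused per input by a learned
    gate (a hypothetical sketch of the dual-stream design)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pre-trained weights stay frozen

        in_f, out_f = base.in_features, base.out_features
        self.scale = alpha / rank
        # Adversarial stream: low-rank update trained against the detector.
        self.adv_down = nn.Linear(in_f, rank, bias=False)
        self.adv_up = nn.Linear(rank, out_f, bias=False)
        # Integrity-keeper stream: low-rank update regularized toward realism.
        self.keep_down = nn.Linear(in_f, rank, bias=False)
        self.keep_up = nn.Linear(rank, out_f, bias=False)
        # Standard LoRA init: zero the up-projections so training starts
        # from the unmodified base layer.
        nn.init.zeros_(self.adv_up.weight)
        nn.init.zeros_(self.keep_up.weight)
        # Adaptive input gate: per-sample convex weighting of the streams.
        self.gate = nn.Sequential(nn.Linear(in_f, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.gate(x)  # in (0, 1); 1 favors the adversarial stream
        adv = self.adv_up(self.adv_down(x))
        keep = self.keep_up(self.keep_down(x))
        return self.base(x) + self.scale * (g * adv + (1.0 - g) * keep)


# Usage: wrap a layer and run a forward pass.
layer = DualStreamLoRALinear(nn.Linear(320, 320))
y = layer(torch.randn(4, 320))  # shape (4, 320)
```

In an actual fine-tuning loop, only the two LoRA streams and the gate would receive gradients, with the adversarial stream driven by a detection loss backpropagated from the target detector and the integrity stream by a fidelity objective.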

Keywords

autonomous driving; adversarial attack; diffusion models; LoRA; object detection
