Deep learning-based perception algorithms in autonomous driving are vulnerable to adversarial examples. Existing adversarial attack methods often struggle to reconcile visual fidelity with strong adversarial aggressiveness, either generating conspicuous noise patterns that lack visual naturalness or requiring computationally expensive iterative optimization at inference time. To address these limitations, we propose DSAdv, a novel framework for high-fidelity adversarial example generation via improved LoRA fine-tuning of Latent Diffusion Models (LDMs). We introduce a decoupled Dual-Stream LoRA architecture consisting of an Adversarial Stream that injects blinding perturbations and an Integrity Keeper Stream that preserves semantic realism. These streams are dynamically fused via an Adaptive Input Gating mechanism, effectively resolving the dynamic coordination problem between visual fidelity and adversarial strength. Furthermore, our method guides LoRA fine-tuning of the pre-trained LDM by backpropagating gradients from a target detector, which significantly shortens the inference process. Extensive experiments on the NuScenes, KITTI, and Science Island datasets validate that DSAdv generates naturalistic adversarial examples with higher transferability and visual quality than state-of-the-art methods such as AdvDiff and PGD. The proposed framework enables the efficient generation of naturalistic and stealthy adversarial scenarios, providing a rigorous tool for evaluating and enhancing the robustness of autonomous driving systems.
autonomous driving; adversarial attack; diffusion models; LoRA; object detection
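The dual-stream design described in the abstract can be illustrated with a minimal numerical sketch: two independent low-rank (LoRA-style) updates to a frozen weight matrix, mixed per input by a sigmoid gate. All names, dimensions, and initializations below are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 64, 4  # hidden size and LoRA rank (illustrative values)

# Frozen base weight of one projection layer in the pre-trained LDM.
W = rng.standard_normal((d, d)) * 0.02

# Two decoupled low-rank adapters (hypothetical stand-ins for the paper's
# Adversarial Stream and Integrity Keeper Stream). Following standard LoRA
# practice, the B factors start at zero so the adapters initially add nothing.
A_adv = rng.standard_normal((r, d)) * 0.01
B_adv = np.zeros((d, r))
A_keep = rng.standard_normal((r, d)) * 0.01
B_keep = np.zeros((d, r))

# Adaptive Input Gating (assumed form): a learned vector maps each token's
# features to a scalar in (0, 1) that mixes the two streams per input.
w_gate = rng.standard_normal(d) * 0.1

def forward(x):
    """x: (n_tokens, d). Frozen base output plus the gated dual-stream update."""
    gate = 1.0 / (1.0 + np.exp(-(x @ w_gate)))            # (n_tokens,)
    delta_adv = (x @ A_adv.T) @ B_adv.T                   # adversarial update
    delta_keep = (x @ A_keep.T) @ B_keep.T                # fidelity update
    delta = gate[:, None] * delta_adv + (1 - gate)[:, None] * delta_keep
    return x @ W.T + delta

x = rng.standard_normal((3, d))
y = forward(x)
print(y.shape)  # (3, 64)
```

In training, only the adapter factors and the gate would receive gradients (in DSAdv, backpropagated from a target detector), while `W` stays frozen; with the zero-initialized `B` factors, the sketch's output initially equals the frozen base layer's output.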