Abstract:
Deep learning has made significant progress on signal modulation classification tasks. In practical applications, however, deep neural networks exhibit inherent vulnerabilities to adversarial attacks: adversarial examples, crafted by adding subtle perturbations to inputs, can cause a model to produce incorrect classification results, posing serious risks to the security of communication systems. This paper proposes a novel defense method, Hybrid Signal Adversarial Training (HSAT), built on the adversarial training framework, to enhance the robustness of signal modulation classification models. To counter the limited effective training data and reduced network representation capacity induced by adversarial training, a mixed-signal data augmentation strategy based on linear interpolation is introduced to improve model performance. In addition, a maximum-margin loss function replaces the cross-entropy loss, widening the margins around the model's decision boundaries and improving robustness to perturbed inputs. Validated against current state-of-the-art adversarial attack algorithms, the proposed method improves adversarial robustness by an average of 7.07% across three attack algorithms, with only a 1.61% decrease in standard classification accuracy compared to traditional adversarial training.
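
The two components named above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the interpolation is assumed to be mixup-style (a Beta-distributed mixing coefficient applied to both signals and one-hot labels), and the maximum-margin loss is assumed to take a multi-class hinge form; all function names and parameters (`mix_signals`, `max_margin_loss`, `alpha`, `margin`) are illustrative.

```python
import numpy as np

def mix_signals(x1, y1, x2, y2, alpha=0.4, rng=None):
    """Mixup-style linear interpolation of two signal examples.

    x1, x2: raw signal arrays (e.g. I/Q samples); y1, y2: one-hot labels.
    A coefficient lam ~ Beta(alpha, alpha) blends both inputs and labels.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def max_margin_loss(logits, labels, margin=1.0):
    """Multi-class hinge-style margin loss.

    Penalizes any competitor logit that comes within `margin` of the
    true-class logit, pushing decision boundaries away from the data.
    logits: (batch, classes); labels: integer class indices.
    """
    idx = np.arange(len(labels))
    correct = logits[idx, labels][:, None]          # true-class logits
    margins = np.maximum(0.0, logits - correct + margin)
    margins[idx, labels] = 0.0                      # ignore the true class itself
    return margins.sum(axis=1).mean()
```

In an adversarial-training loop, mixed examples would be perturbed and fed to the model, with `max_margin_loss` replacing cross-entropy so that correctly classified examples are still pushed `margin` away from every competing class.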