Room: EV001.162, Bldg: EV001.162, 1515 St. Catherine St. West, Montreal, Quebec, Canada, H3G 2W1
Adversarial Machine Learning Attacks on RF Signal Classifiers
Abstract

Machine learning (ML) has recently been applied to the classification of radio frequency (RF) signals. One use case of interest is discriminating between different wireless protocols that operate over a shared and potentially contested spectrum. Although highly accurate classifiers have been developed for various wireless scenarios, research points to the vulnerability of such classifiers to adversarial machine learning (AML) attacks. In one such attack, the attacker trains a surrogate deep neural network (DNN) model to produce intelligently crafted low-power “perturbations” that degrade the classification accuracy of the legitimate classifier. In this talk, I will first present several novel DNN protocol classifiers that we designed for a shared-spectrum environment. These classifiers performed quite well in both simulations and over-the-air (OTA) experiments under benign (non-adversarial) noise. I will then present several AML techniques that an attacker may use to generate low-power perturbations. When combined with a legitimate signal, these perturbations are shown to uniformly degrade classification accuracy, even in the very high signal-to-noise-ratio (SNR) regime. Different attack models are studied, depending on how much information the attacker has about the defender’s classifier. Finally, I will discuss possible defense mechanisms, as well as other research efforts related to the detection of adversarial transmissions.

Co-sponsored by: Dr. Jun Yan
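For readers unfamiliar with this class of attack, the sketch below illustrates one widely used perturbation-generation technique, the fast gradient sign method (FGSM), applied to a toy surrogate I/Q-signal classifier in PyTorch. The architecture, the `fgsm_perturbation` helper, and the epsilon budget are illustrative assumptions for exposition only; they are not the specific classifiers or attack methods presented in the talk.

```python
import torch
import torch.nn as nn

# Hypothetical surrogate classifier: maps raw I/Q frames (2 x N samples) to protocol classes.
class SurrogateClassifier(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(16, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_perturbation(model, signal, label, epsilon=0.01):
    """Craft a low-power additive perturbation with the fast gradient sign method.

    `epsilon` bounds the per-sample perturbation amplitude, keeping the attack
    power small relative to the legitimate signal.
    """
    signal = signal.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(signal), label)
    loss.backward()
    # Step in the direction that most increases the surrogate classifier's loss.
    return epsilon * signal.grad.sign()

# Usage: perturb a batch of I/Q frames and compare the classifier's predictions.
model = SurrogateClassifier()
x = torch.randn(8, 2, 1024)        # stand-in for received I/Q frames
y = torch.randint(0, 4, (8,))      # true protocol labels
delta = fgsm_perturbation(model, x, y)
clean_pred = model(x).argmax(dim=1)
adv_pred = model(x + delta).argmax(dim=1)
print(clean_pred, adv_pred)
```

In a black-box setting of the kind the abstract alludes to, the attacker computes gradients against its own surrogate model and relies on the perturbation transferring to the defender's classifier; stronger iterative variants of this gradient step exist, but the low-power, additive structure of the attack is the same.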