Securing Physical Layer Passband Signals Via Machine Learning

Machine Learning (ML)-based Wireless Sensor Networks (WSNs) need secure algorithms to avoid human and financial costs in vehicular, agricultural, and national-defense applications. Several challenges in ML-based WSNs therefore require novel solutions from the technical community. For instance, no stationary wireless localization algorithm currently exists that can classify the source of signals detected by WSNs. Another challenge is the active obfuscation of a transmission's modulation class from eavesdropping ML-based WSNs, where power inefficiency and the lack of a Quality-of-Service (QoS) guarantee make a practical solution difficult to achieve. Finally, implementations of obfuscation techniques and their countermeasures in WSN applications have not reached the level of sophistication found in domains such as image processing, which have employed ML-based solutions for decades. To address these challenges, this dissertation presents three separate yet connected novel contributions. First, we present a self-organizing, opportunistic localization framework that jointly localizes stationary transmitters and classifies signal detections. Experimental results from this framework demonstrated the ability to locate an unknown number of low-power beacons with a 1.2-meter error bias. Second, we devised two new constraints on ML solutions to increase power efficiency and guarantee a minimum QoS in WSNs, namely, constructive and Bandwidth Equivalency (BWE) perturbation constraints. Based on these novel constraints, we showed an improvement of up to 80% in Bit Error Rate (BER) and a reduction of up to 60% in adversarial classification precision.
Finally, we implemented the first real-world hardware experiment with an adversary-trained, ML-based, modulation-classifying sensor, and evaluated a case study involving wireless adversarial training, regularization, model width, label leaking, parallel and cascade variations, and iterative attacks. Results from this case study showed that strong Gradient Sign Method (GSM) attacks can reduce classification precision to that of random guessing, but that moderate-strength attacks can be resisted by up to 35% while simultaneously increasing non-adversarial precision by up to 1%.
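The gradient-sign attack family referenced in the abstract can be illustrated in miniature: a perturbation of magnitude ε is applied to the input in the direction of the sign of the loss gradient. The sketch below uses a toy logistic classifier with an analytic gradient; the classifier, data, and parameters are illustrative assumptions, not the dissertation's actual experimental setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gsm_attack(x, w, b, y, eps):
    """Gradient-sign perturbation of input x against a toy logistic
    classifier (w, b) with true label y in {0, 1}.

    For binary cross-entropy loss, the gradient w.r.t. the input is
    dL/dx = (sigmoid(w.x + b) - y) * w; the attack steps eps in the
    sign of that gradient to increase the loss."""
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

# Toy example: a point the classifier labels confidently as class 1.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = w.copy()          # lies well inside the class-1 region
y = 1.0

x_adv = gsm_attack(x, w, b, y, eps=0.5)
p_clean = sigmoid(w @ x + b)      # confidence before the attack
p_adv = sigmoid(w @ x_adv + b)    # confidence after the attack
```

For this linear-in-the-input model, the perturbed score is w·x − ε‖w‖₁, so the attack provably lowers the classifier's confidence while keeping each per-sample perturbation bounded by ε — the same bounded-perturbation structure that the constructive and BWE constraints in the dissertation further restrict.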

Identifier
  • etd-47031
Year
  • 2022
Date created
  • 2022-01-25
Last modified
  • 2023-12-05


Permanent link to this page: https://digital.wpi.edu/show/8910jx76p