CAREER: Physical Side-Channels Beyond Cryptography: Transforming the Side-Channel Framework for Deep Learning

Principal Investigator

Aydin Aysu

Since its inception over two decades ago, physical side-channel analysis has focused exclusively on cryptographic implementations. This research deals with extracting secret cryptographic keys by correlating them with the power/electromagnetic (EM) signals of a target embedded device, and with methods to mitigate those vulnerabilities. Cryptography, however, is not the only application domain with confidentiality needs. Machine learning (ML) is a critical new target, because the internal ML model must be kept secret. The move toward edge intelligence pushes ML onto ubiquitous embedded devices, making them a prime target for physical side-channel attacks. If the model leaks, not only is its intellectual property violated, but the ML system also becomes more vulnerable to adversarial attacks.
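To make this threat concrete, the Python sketch below illustrates the basic correlation-based recovery step on simulated data; the planted weight value, the Hamming-weight leakage model, and the noise level are illustrative assumptions, not results from the proposed work. The attacker guesses a weight, predicts the power consumption of the weight-by-input multiplication, and correlates the prediction with measured traces; the correct guess yields the highest correlation.

import numpy as np

# Simulated correlation power analysis (CPA) against a single 8-bit
# fixed-point weight; the traces and leakage model are illustrative.
rng = np.random.default_rng(0)
SECRET_WEIGHT = 173                      # planted weight the "attacker" does not know
N_TRACES = 3000

def hamming_weight(values):
    return np.array([bin(int(v) & 0xFF).count("1") for v in values])

# Known inputs and simulated power traces: leakage is modeled as the
# Hamming weight of the low byte of input*weight, plus Gaussian noise.
inputs = rng.integers(0, 256, N_TRACES)
traces = hamming_weight(inputs * SECRET_WEIGHT) + rng.normal(0.0, 1.0, N_TRACES)

# For every candidate weight, correlate the hypothesized leakage with
# the measured traces; the correct candidate maximizes the correlation.
scores = np.zeros(256)
for guess in range(256):
    hypothesis = hamming_weight(inputs * guess)
    if hypothesis.std() > 0:
        scores[guess] = np.corrcoef(hypothesis, traces)[0, 1]

print("recovered weight:", int(np.argmax(scores)))   # expected to recover 173

A real attack replaces the simulated traces with oscilloscope or EM-probe measurements and repeats the same correlation step for each secret parameter.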

The research objective of this proposal is to extend the physical side-channel analysis framework to deep neural network classifiers. We will first analyze how ML model parameters such as neural network weights can leak in hardware through power/EM measurements and demonstrate this vulnerability on an actual design. We will then formulate new algorithmic (masking-based) defenses to construct provably-secure building blocks. Our ultimate goal is to automate secure-by-design neural network implementations by integrating composable defenses through high-level synthesis tools for hardware accelerators. The resulting solutions will be ported to FPGAs to benchmark their overheads and validated with extensive side-channel vulnerability tests.
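As a point of reference for the masking-based direction, the sketch below shows first-order arithmetic (additive) masking of a fixed-point dot product; the modulus, share count, and helper names are illustrative assumptions and do not represent the proposed building blocks. Each weight is split into two random shares so that no intermediate value processed by the device depends on the unmasked weight.

import numpy as np

MOD = 1 << 16                            # illustrative 16-bit modular arithmetic
rng = np.random.default_rng(1)

def mask_weights(weights):
    # Split each weight into two additive shares mod 2^16; each share
    # alone is uniformly random and reveals nothing about the weight.
    share0 = rng.integers(0, MOD, size=weights.shape)
    share1 = (weights - share0) % MOD
    return share0, share1

def masked_dot(x, share0, share1):
    # The dot product is linear, so it can be computed on each share
    # independently; first-order leakage exposes only randomized shares.
    acc0 = int(np.dot(x, share0)) % MOD
    acc1 = int(np.dot(x, share1)) % MOD
    return (acc0 + acc1) % MOD           # recombination (unmasking) step

weights = np.array([3, 250, 17, 91])
x = np.array([1, 2, 3, 4])
assert masked_dot(x, *mask_weights(weights)) == int(np.dot(x, weights)) % MOD

Linear layers compose naturally under such sharing; nonlinear operations such as ReLU require dedicated masked gadgets, which is one reason composable, provably-secure building blocks are needed.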

The teaching goal of this proposal is to publish the first textbook on hardware security for ML and to introduce a new course on the subject. The PI has taught concepts related to this proposal in both the undergraduate and graduate curricula and will consolidate those efforts into developing the new course. The course will target both undergraduate and graduate students with no prior background in ML or hardware security and will help them develop a thorough understanding of deep neural networks, implementation attacks, and real-world deployment challenges.