by
Pin-Yu Chen
IBM Research, Yorktown Heights

Despite achieving high standard accuracy on a variety of machine learning tasks, deep learning models built upon neural networks have recently been shown to lack adversarial robustness. The decision making of well-trained deep learning models can be easily falsified and manipulated, raising ever-increasing concerns in safety-critical and security-sensitive applications that require certified robustness and guaranteed reliability.

This tutorial will provide an overview of recent advances in adversarial robustness research, featuring both breadth of topics and technical depth. We will cover three fundamental pillars of adversarial robustness: attack, defense, and verification. Attack refers to the efficient generation of adversarial examples for robustness assessment under different attack assumptions (e.g., white-box or black-box attacks); illustrative sketches of both settings appear below. Defense refers to adversary detection and robust training algorithms that enhance model robustness. Verification refers to attack-agnostic metrics and certification algorithms for the principled evaluation and standardization of adversarial robustness. For each pillar, we will emphasize the tight connection between signal processing and adversarial robustness research, spanning fundamental techniques such as first-order and zeroth-order optimization, minimax optimization, geometric analysis, model compression, data filtering and quantization, subspace analysis, active sampling, and frequency-component analysis, as well as applications such as computer vision, automatic speech recognition, natural language processing, and data regression.
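To make the attack pillar concrete, here is a minimal sketch of a one-step white-box attack in the spirit of the fast gradient sign method (FGSM), which casts adversarial example generation as a first-order optimization step on the classification loss. The model, inputs, labels, and perturbation budget `eps` are hypothetical placeholders, not code from the tutorial.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """One-step white-box attack (FGSM-style): perturb the input along
    the sign of the loss gradient, i.e., a single first-order step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each coordinate by eps in the direction that increases the
    # loss, then clip back to the valid input range [0, 1].
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Black-box attacks, by contrast, lean on zeroth-order optimization: the gradient must be estimated from loss values alone, since the model's internals are not accessible. Below is one common form, a two-sided random finite-difference estimator (reusing the `torch` import above); `loss_fn`, the smoothing radius `mu`, and the sample count are again illustrative assumptions.

```python
def zo_gradient_estimate(loss_fn, x, mu=1e-3, n_samples=50):
    """Zeroth-order gradient estimate via random finite differences:
    only loss *values* are queried, matching the black-box setting.
    `loss_fn` maps an input tensor to a scalar loss."""
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        u = torch.randn_like(x)  # random probing direction
        # Two-sided difference along u approximates the directional
        # derivative; scaling by u recovers a full-gradient estimate.
        delta = (loss_fn(x + mu * u) - loss_fn(x - mu * u)) / (2 * mu)
        grad += delta * u
    return grad / n_samples
```

The estimated gradient can then be plugged into the same sign-based update as in the white-box sketch, at the cost of many more model queries; this query-versus-accuracy trade-off is a recurring theme in black-box attack research.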

This tutorial aims to serve as a short lecture for researchers and students to approach the emerging field of adversarial robustness from the viewpoint of signal processing.
