Tuesday, November 27, 2018. 12:00 PM. GHC 6115.
Yaodong Yu -- Adversarial Defenses and Attacks: A Case Study on NIPS 2018 Adversarial Vision Challenge
Abstract: In recent years, we have witnessed the success of machine learning, especially deep learning, in various areas. Despite widespread adoption, recent studies have shown that machine learning models are vulnerable to adversarial examples, i.e., inputs altered by very small, often imperceptible perturbations that cause the models to make mistakes. Thus, understanding and defending against adversarial examples is crucial to addressing AI security and interpretability concerns.
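To make this concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one standard way to craft such adversarial examples. This is an illustrative baseline, not a method from the talk; the PyTorch model, random input, and epsilon are all assumptions.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Any pretrained image classifier works as the victim model (assumption).
    model = models.resnet18(pretrained=True).eval()

    def fgsm_attack(x, label, epsilon=8 / 255):
        # Take one small step in the direction that increases the loss on `label`.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        # Keep the perturbed image in the valid pixel range.
        return x_adv.clamp(0, 1).detach()

    x = torch.rand(1, 3, 224, 224)      # stand-in for a real image
    label = model(x).argmax(dim=1)      # the model's original prediction
    x_adv = fgsm_attack(x, label)
    print(model(x_adv).argmax(dim=1))   # frequently differs from `label`

Even though the perturbation has magnitude at most epsilon per pixel, it is often enough to flip the prediction, which is the vulnerability the abstract refers to.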
In this talk, we will focus on how to train robust models and generate adversarial examples in the NIPS 2018 Adversarial Vision Challenge. In the first part of the talk, we will briefly introduce basic adversarial defense and attack techniques, as well as the rules of the NIPS 2018 Adversarial Vision Challenge. In the second part, we will present several take-home messages on how to train robust models efficiently on a large-scale dataset (Tiny ImageNet) using deep convolutional neural networks (e.g., ResNet-152). In addition, we will present our approach to generating adversarial examples for targeted attacks.
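For the targeted-attack setting, a minimal sketch of targeted projected gradient descent (PGD) is given below. This is a standard baseline for reference, not necessarily the approach presented in the talk; the step size, perturbation budget, and iteration count are assumptions.

    import torch
    import torch.nn.functional as F

    def targeted_pgd(model, x, target, epsilon=8 / 255, alpha=2 / 255, steps=20):
        # Search within an L-infinity ball of radius epsilon around x for a
        # point that the model classifies as `target`.
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), target)
            grad, = torch.autograd.grad(loss, x_adv)
            # Step *down* the loss toward the target class (the opposite sign
            # of an untargeted attack), then project back into the epsilon-ball
            # and the valid pixel range.
            x_adv = x_adv.detach() - alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
            x_adv = x_adv.clamp(0, 1)
        return x_adv.detach()

    # Example usage (assuming `model` and `x` as in the FGSM sketch above):
    # x_adv = targeted_pgd(model, x, target=torch.tensor([17]))

The key difference from the untargeted case is that the attacker descends the loss of a chosen target class rather than ascending the loss of the true label, while the projection step keeps the perturbation within the allowed budget.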
Based on joint work with Hongyang Zhang (CMU), Susu Xu (CMU), Hongbao Zhang (Petuum), Pengtao Xie (Petuum) and Eric P. Xing (CMU and Petuum).