AI Projects

Understanding Verification Under Distributional Shift

Project Description

Deep neural networks are now incredibly accurate on a range of benchmark tasks. However, they remain susceptible to adversarial examples: small perturbations of the input that change a model’s predictions. Researchers have made great progress in verifying models against specific adversarial threat models when the test data come from the same distribution as the training data. But how do these verified models behave under distributional shift? In this project you will consider several forms of distributional shift and study how they affect verified models.
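
A minimal sketch of the kind of experiment involved (assuming you already have a trained or verified model and a test loader; the names below are illustrative and not tied to any particular verification library) is to simulate a simple distributional shift and measure how accuracy degrades:

    import torch

    def shift(images, noise_std=0.1, brightness=0.7):
        # Toy distributional shift: additive Gaussian noise plus a brightness change.
        noisy = images + noise_std * torch.randn_like(images)
        return (brightness * noisy).clamp(0.0, 1.0)

    def accuracy_under_shift(model, loader, device="cpu"):
        # Plain accuracy of `model` on shifted test data; a real study would also
        # re-run the verifier to measure certified accuracy under the shift.
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for x, y in loader:
                x, y = shift(x).to(device), y.to(device)
                correct += (model(x).argmax(dim=1) == y).sum().item()
                total += y.numel()
        return correct / total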

Research Question

How do verified models behave under distributional shift?

Nearest Neighbor Paper

https://arxiv.org/abs/1908.08016

Understanding Image Generation with Robust Models

Project Description

Generating images is a fundamental problem in computer vision. Modern techniques typically involve GANs, but recent work has shown that deep neural networks that are robust to specific adversaries can also be used for generation (see starting paper). That work only explored models robust to a single type of adversarial perturbation. Are models that are robust to other threat models also useful for generation? In this project you will explore this question and, more broadly, how robustness can be used in generation.
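
A minimal sketch of the generation procedure described in the starting paper (assuming robust_model is a pretrained adversarially robust classifier; the code is illustrative rather than the paper’s exact recipe): start from noise and take gradient-ascent steps on the logit of a target class.

    import torch

    def generate(robust_model, target_class, steps=60, step_size=0.5,
                 shape=(1, 3, 224, 224)):
        # Maximize the target-class logit of a robust classifier by gradient
        # ascent on the input, starting from random noise.
        x = torch.rand(shape, requires_grad=True)
        for _ in range(steps):
            logit = robust_model(x)[0, target_class]
            grad, = torch.autograd.grad(logit, x)
            with torch.no_grad():
                x += step_size * grad / (grad.norm() + 1e-12)  # normalized ascent step
                x.clamp_(0.0, 1.0)                             # stay in valid pixel range
        return x.detach()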

Research Question

How does the type of adversarial robustness affect a model’s generative properties?

Nearest Neighbor Paper

https://arxiv.org/abs/1906.09453

Are Adversarial Examples a Property of the Data?

Project Description

Deep neural networks are now incredibly accurate on a range of benchmark tasks. However, they remain susceptible to adversarial examples: small perturbations of the input that change a model’s predictions. Recent work has shown that training on data corrupted by an adversary can still give good clean test accuracy (see starting paper and associated blog post for specific experimental details). Can the opposite phenomenon happen? Is it possible to train an adversarially robust model without attacking it during training? You will explore this question.
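
Whatever training procedure you end up using, robustness still has to be measured with an attack at evaluation time. As a minimal sketch (assuming a trained model and a test batch x, y; a serious evaluation would use a stronger attack such as PGD), an FGSM check looks like this:

    import torch
    import torch.nn.functional as F

    def fgsm_accuracy(model, x, y, eps=8 / 255):
        # One-step FGSM attack followed by accuracy on the perturbed batch.
        x = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(loss, x)
        x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0)
        with torch.no_grad():
            return (model(x_adv).argmax(dim=1) == y).float().mean().item()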

NOTE: Due to computational constraints, this project will involve the instructor running code that you write. Please contact Daniel ASAP if you are interested in this project.

Research Question

How much are adversarial examples a property of the training dataset?

Nearest Neighbor Paper

https://arxiv.org/abs/1905.02175

Training DNNs for High Performance Inference

Project Description

Performing inference with modern DNNs can be extremely expensive. Researchers have developed many techniques to reduce the cost of inference, ranging from model compression and knowledge distillation to regularization techniques that yield smaller, higher-accuracy models. Which of these techniques is best for training DNNs for high-performance inference? Do they combine well? You will explore these questions in this project.
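
As a minimal sketch of one of the techniques mentioned above, knowledge distillation (assuming you already have teacher and student logits for a batch with labels y; names and hyperparameters are illustrative), the student is trained to match a softened teacher distribution in addition to the usual label loss:

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, y, T=4.0, alpha=0.9):
        # KL divergence between softened student and teacher distributions,
        # plus the standard cross-entropy on the true labels.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)  # rescale so gradient magnitudes stay comparable across temperatures
        hard = F.cross_entropy(student_logits, y)
        return alpha * soft + (1 - alpha) * hard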

Research Question

What is the best way to train DNNs for high-performance inference?

Nearest Neighbor Paper

https://arxiv.org/abs/1510.00149

Robustness Against Adversaries via Stochasticity

Project Description

Deep neural networks are now incredibly accurate on a range of benchmark tasks. However, they remain susceptible to adversarial examples, or small perturbations that change a model’s predictions. Recent work suggests that adding stochastic noise to inputs can improve robustness against adversaries. However, two major questions remain. First, assessing robustness can be extremely difficult. Are these results correct? Second, do these results hold against unforeseen adversaries? You will answer these questions in this project.
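
As a minimal sketch of a stochastic defense of the kind described above (assuming a trained model and an input batch x; this is an illustrative majority-vote scheme, not the defense from the starting paper), predictions are aggregated over many noisy copies of each input:

    import torch

    def noisy_vote_predict(model, x, sigma=0.25, n_samples=32):
        # Classify several noisy copies of each input and take a majority vote.
        votes = []
        with torch.no_grad():
            for _ in range(n_samples):
                noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
                votes.append(model(noisy).argmax(dim=1))
        return torch.stack(votes).mode(dim=0).values

Note that evaluating a randomized defense like this fairly requires attacks that account for the randomness, which is exactly the difficulty raised above.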

Research Question

Can stochasticity improve robustness against adversaries?

Nearest Neighbor Paper

http://openaccess.thecvf.com/content_CVPR_2019/papers/Raff_Barrage_of_Random_Transforms_for_Adversarially_Robust_Defense_CVPR_2019_paper.pdf