- Intriguing properties of neural networks, C. Szegedy et al., arXiv 2014
- Explaining and Harnessing Adversarial Examples, I. Goodfellow et al., ICLR 2015
- DeepFool: a simple and accurate method to fool deep neural networks, S. Moosavi-Dezfooli et al., CVPR 2016
- The Limitations of Deep Learning in Adversarial Settings, N. Papernot et al., IEEE EuroS&P 2016
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, N. Papernot et al., arXiv 2016
- Adversarial Examples In The Physical World, A. Kurakin et al., ICLR workshop 2017
- Delving into Transferable Adversarial Examples and Black-box Attacks, Y. Liu et al., ICLR 2017
- Towards Evaluating the Robustness of Neural Networks, N. Carlini et al., IEEE S&P 2017
- Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, N. Papernot et al., Asia CCS 2017
- Adversarial Machine Learning At Scale, A. Kurakin et al., ICLR 2017
- Ensemble Adversarial Training: Attacks and Defenses, F. Tramèr et al., arXiv 2017
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks, N. Papernot et al., IEEE S&P 2016
- Extending Defensive Distillation, N. Papernot et al., arXiv 2017
- Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, A. Nguyen et al., CVPR 2015
- Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, N. Carlini et al., arXiv 2017
CAAD Contest: https://en.caad.geekpwn.org/
Randomization and Transformation Technique (a sketch of the idea follows)
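The randomization defense here presumably follows the random-resize-and-pad idea (Mitigating Adversarial Effects Through Randomization, C. Xie et al., ICLR 2018). Below is a minimal sketch of that preprocessing step; the PyTorch framework, the image sizes (299 resized up to 331), and the function name `randomize_input` are illustrative assumptions, not the submission's actual code.

```python
import torch
import torch.nn.functional as F

def randomize_input(x, max_size=331, orig_size=299):
    """Random resize + random zero-padding (input-randomization defense sketch).

    x: image batch of shape (N, C, H, W) with H == W == orig_size.
    Returns a batch of shape (N, C, max_size, max_size).
    """
    # Pick a random target size between the original size and the maximum.
    rnd = int(torch.randint(orig_size, max_size + 1, (1,)).item())
    resized = F.interpolate(x, size=(rnd, rnd), mode="nearest")

    # Randomly split the remaining space into left/right and top/bottom padding.
    pad_total = max_size - rnd
    pad_left = int(torch.randint(0, pad_total + 1, (1,)).item())
    pad_top = int(torch.randint(0, pad_total + 1, (1,)).item())
    padded = F.pad(resized,
                   (pad_left, pad_total - pad_left, pad_top, pad_total - pad_top),
                   mode="constant", value=0.0)
    return padded

# Usage (hypothetical): wrap the classifier so every forward pass sees a
# differently randomized view of the input.
# logits = model(randomize_input(images))
```

Because the resize and padding offsets are resampled on every forward pass, an attacker cannot precompute gradients through one fixed transformation, which is what weakens purely gradient-based attacks against this kind of defense.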
Final Submission at 10:08 a.m. CT on Aug 31, 2018
More adversarial attack/defense experiments to come in the future.
Most of the code in my submission is forked from [1], [2], and [3].
Ranked 6th in the adversarial attack track of the second development round, using simple momentum-based FGSM on ensemble models (misclassification rate: 0.7690). A sketch of the attack appears below.
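For reference, here is a minimal sketch of momentum iterative FGSM (MI-FGSM, Boosting Adversarial Attacks with Momentum, Y. Dong et al., CVPR 2018) fused over an ensemble by averaging logits. The PyTorch framework, the hyperparameters (eps = 16/255, 10 iterations, decay 1.0), and the function name `mi_fgsm_ensemble` are assumptions for illustration; the actual submission builds on the forked code in [1], [2], and [3].

```python
import torch

def mi_fgsm_ensemble(models, x, y, eps=16 / 255, num_iter=10, decay=1.0):
    """Momentum Iterative FGSM against an ensemble of classifiers (sketch).

    models: list of callables mapping images to logits (hypothetical interface).
    x: clean image batch in [0, 1], y: true labels.
    Returns adversarial images inside an L_inf ball of radius eps around x.
    """
    x = x.detach()
    alpha = eps / num_iter              # per-step size
    x_adv = x.clone()
    momentum = torch.zeros_like(x)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(num_iter):
        x_adv.requires_grad_(True)
        # Fuse the ensemble by averaging logits before computing the loss.
        logits = torch.stack([m(x_adv) for m in models]).mean(dim=0)
        loss = loss_fn(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Accumulate L1-normalized gradients into the momentum term.
        grad = grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        momentum = decay * momentum + grad

        # Take a signed ascent step and project back into the eps-ball.
        x_adv = x_adv.detach() + alpha * momentum.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```

Averaging logits over several pretrained models before back-propagating is the standard ensemble fusion used to improve transferability of the resulting adversarial examples to unseen (black-box) defenses.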