Intriguing properties of neural networks (2013)
Explaining and harnessing adversarial examples (2014)
Towards evaluating the robustness of neural networks (2017)
Synthesizing robust adversarial examples (2018)
Adversarial Examples - A Complete Characterisation of the Phenomenon (2019)
One Pixel Attack for Fooling Deep Neural Networks (2019)
Lecture 16 | Adversarial Examples and Adversarial Training
Is "Adversarial Examples" an Adversarial Example?
https://www.youtube.com/watch?v=zQ_uMenoBCk&feature=youtu.be
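Of the techniques covered in the papers above, the fast gradient sign method (FGSM) from "Explaining and harnessing adversarial examples" is the simplest to try out: it perturbs an input by x_adv = x + epsilon * sign(grad_x J(theta, x, y)). The sketch below is a minimal, hedged illustration of that idea in PyTorch; the model, input, and label are placeholders I invented to keep it self-contained, not anything taken from the papers, and a real experiment would use a trained classifier and real data.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """Minimal FGSM sketch: step by epsilon in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # x_adv = x + epsilon * sign(grad_x J(theta, x, y)), kept in the valid pixel range
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Placeholder model and data, only to make the sketch runnable end to end.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # fake "image" in [0, 1]
y = torch.tensor([3])          # fake label
x_adv = fgsm_attack(model, x, y, epsilon=0.1)
print((x_adv - x).abs().max().item())  # perturbation is bounded by epsilon
```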