Adversarial Examples and Their Implications

In this article, we discuss adversarial attacks and their implications for the security of deep learning models.
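As a concrete illustration of how such an attack works, here is a minimal sketch of a gradient-sign perturbation (in the spirit of FGSM) against a toy logistic-regression model. The weights, input, and epsilon are arbitrary assumptions for illustration, not values from this article:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, w, b, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, w, b, y, eps):
    # For logistic regression, the gradient of the loss
    # w.r.t. the input x has the closed form (p - y) * w.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    # Step in the direction that increases the loss.
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=10)   # toy model weights (assumed)
b = 0.0
x = rng.normal(size=10)   # clean input
y = 1.0                   # true label

x_adv = fgsm(x, w, b, y, eps=0.1)
```

The perturbed input `x_adv` stays within an L-infinity ball of radius `eps` around `x`, yet its loss is strictly higher, which is the core idea an attacker exploits.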


Autoencoders (AE) are a family of neural networks trained to reproduce their input at the output. They work by compressing the input into a latent-space representation, and then reconstructing the output from this representation.
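The compress-then-reconstruct loop can be sketched with a tiny linear autoencoder in NumPy. All dimensions, data, and the learning rate below are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # toy data: 200 samples, 8 features
W_enc = rng.normal(size=(8, 3)) * 0.1  # encoder: 8 features -> 3 latent dims
W_dec = rng.normal(size=(3, 8)) * 0.1  # decoder: 3 latent dims -> 8 features

def reconstruction_error(X, W_enc, W_dec):
    Z = X @ W_enc       # compress input into the latent space
    X_hat = Z @ W_dec   # reconstruct the input from the latent code
    return np.mean((X - X_hat) ** 2)

lr = 0.01
initial = reconstruction_error(X, W_enc, W_dec)
for _ in range(500):
    Z = X @ W_enc
    X_hat = Z @ W_dec
    err = X_hat - X
    # Gradients of the squared reconstruction error
    # w.r.t. both weight matrices.
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = reconstruction_error(X, W_enc, W_dec)
```

Training drives the reconstruction error down, forcing the 3-dimensional bottleneck to capture the structure of the 8-dimensional input. Real autoencoders add nonlinear activations and deeper stacks, but the objective is the same.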
