Thursday, January 18, 2018

Knowing is Half the Battle



Psychedelic toasters fool image recognition tech
Jan 2018, BBC

source: Adversarial Patch
Tom B. Brown, Dandelion Mané, Aurko Roy, Martín Abadi, Justin Gilmer, Dec 2017

The battle has begun - between us and the robots. If, by "robots", you mean recognition algorithms. We're already coming up with ways to trick these programs into seeing things that aren't there, and to camouflage things that are there.

"Adversarial images" are coming up in neural net news a lot these days. This is where an image, an adversarial image, can trick recognition software into seeing something that - as far as humans can tell - is not really there. And in this new piece, they can be tricked into not seeing something that is there.  [15, 5]

In their paper seen above, Adversarial Patch, the authors describe these adversarial images as "carefully chosen inputs that cause the network to change output without a visible change to a human".

In one of these tricky pictures, each pixel is changed very slightly - not enough for a human to notice anything different, but enough to completely change what an image-recognizing neural network sees. One of the methods for finding or creating an adversarial image is called DeepFool. [10]
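To make that pixel-nudging idea concrete, here is a minimal sketch - not DeepFool itself, but the even simpler fast gradient sign method from Goodfellow et al. [5]. The PyTorch classifier, tensor shapes, and epsilon value are my own illustrative choices, not anything from the Adversarial Patch paper.

import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical setup: a pretrained ImageNet classifier; `image` is assumed
# to be a (1, 3, 224, 224) tensor with values in [0, 1].
model = models.resnet50(weights="IMAGENET1K_V1").eval()

def fgsm_perturb(image, true_label, epsilon=2.0 / 255):
    """Nudge every pixel by at most epsilon in the direction that
    increases the classifier's loss on the true label."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Each pixel moves by +/- epsilon: far too small for a person to
    # notice, but often enough to flip the network's answer.
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)
    return adversarial.detach()

# Usage (illustrative): perturbed = fgsm_perturb(image, torch.tensor([954])),
# where 954 is the ImageNet index usually listed for "banana".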

This approach can even be extended to 3D, where slight changes ("adversarial perturbations") to a 3D-printed object can make it "look" like something else to the computer. (See the Turtle-Rifle, where a 3D-printed turtle was adversarially perturbed to look like a rifle - to you and me it would still look like a turtle, but to many image-rec algos out there it is seen as a rifle instead.) [3]

The best example here is what are called adversarial glasses, which fool face recognition algorithms. Wave to the NSA!  [13]

So what's the problem?

Unfortunately, the problems arise when a stop sign is adversarially perturbed. These slight changes, made by, say, a teenage prankster in a calculated act of graffiti, can make it so that we see a stop sign but our self-driving car does not. [4]

With this newest approach, the super-image (my own term here, not the researchers') can simply be printed out and placed near the object of interest, fooling the system that is meant to recognize that object.
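As a rough sketch of what that means digitally - before anyone even prints anything out - imagine pasting the patch into a photo and asking the classifier what it sees. Everything below (the PyTorch classifier, the tensor shapes, the 50x50 patch size) is my own illustrative assumption, not code from the paper.

import torch
from torchvision import models

# Hypothetical classifier; `scene` is a (1, 3, 224, 224) photo in [0, 1]
# and `patch` is a (3, 50, 50) adversarial patch, both placeholders here.
model = models.resnet50(weights="IMAGENET1K_V1").eval()

def paste_patch(scene, patch, top, left):
    """Overwrite a square region of the photo with the patch - the
    digital analogue of laying the printed sticker next to the object."""
    out = scene.clone()
    h, w = patch.shape[-2:]
    out[..., top:top + h, left:left + w] = patch
    return out

# before = model(scene).argmax(dim=1)                            # what it sees originally
# after = model(paste_patch(scene, patch, 150, 150)).argmax(dim=1)
# A successful patch makes `after` come out as the attacker's chosen class.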

From the same paper above, we see a picture of a banana with what appears to be a funny-looking, psychedelic-metallic picture next to it. That funny picture was generated by optimizing a single image until the network sees it, above everything else in the scene, as a toaster - a kind of super-toaster picture, as far as the network is concerned.

The patch can be really small - small enough that humans hardly pay it any mind - and yet it will distract the image recognition system, or rather attract an undue amount of its attention.
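For the curious, here is a rough sketch of how such a patch gets optimized, going by the paper's description: start from noise, paste the patch into many photos at random positions, and nudge its pixels so the classifier leans toward "toaster" every time. The specific model, sizes, class index, and training loop below are my own stand-ins, not the authors' code.

import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)          # only the patch gets trained

TOASTER = 859                        # ImageNet index commonly listed for "toaster"
patch = torch.rand(3, 50, 50, requires_grad=True)   # start from random noise
optimizer = torch.optim.Adam([patch], lr=0.05)

def paste(image, top, left):
    out = image.clone()
    out[..., top:top + 50, left:left + 50] = patch.clamp(0, 1)
    return out

def train_step(images):
    """One optimization step over a batch of (1, 3, 224, 224) photos in [0, 1]."""
    optimizer.zero_grad()
    loss = 0.0
    for img in images:
        # Drop the patch somewhere random so it works wherever it is placed.
        top = int(torch.randint(0, 224 - 50, (1,)))
        left = int(torch.randint(0, 224 - 50, (1,)))
        logits = model(paste(img, top, left))
        # Push the network toward answering "toaster" for every patched photo.
        loss = loss + F.cross_entropy(logits, torch.tensor([TOASTER]))
    loss.backward()
    optimizer.step()

The real paper also randomizes rotation, scale, and lighting so the patch survives being printed out and photographed, but the random-placement loop above is the core of the trick.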

As stated, the battle between us and the robots has begun. But really, it's the same old story, as it always will be - the battle is really between us and ourselves, only this time we're using the robots to fight each other.


Post Script

I can't help but think about how graffiti artists and hackers in the mid-2000s started tagging major targets not in the real world but on Google Maps. Tag the White House in real life? Probably not. But tag it in the virtual world and it has a pretty similar effect, perhaps even more so.

Although this adversarial tech is with scientists in Google labs right now, tomorrow it will be with kids on the street.


Notes

I kept the notes from the original paper; there's a lot going on here.

[3] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok. Synthesizing robust adversarial examples.
arXiv preprint arXiv:1707.07397, 2017.
[4] I. Evtimov, K. Eykholt, E. Fernandes, T. Kohno, B. Li, A. Prakash, A. Rahmati, and D. Song.
Robust physical-world attacks on deep learning models. arXiv preprint arXiv:1707.08945,
2017.
[5] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples.
arXiv preprint arXiv:1412.6572, 2014.
[10] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. Deepfool: a simple and accurate method to
fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 2574–2582, 2016.
[11] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. The limitations of
deep learning in adversarial settings. In Security and Privacy (EuroS&P), 2016 IEEE European
Symposium on, pages 372–387. IEEE, 2016.
[13] M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter. Accessorize to a crime: Real and
stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC
Conference on Computer and Communications Security, pages 1528–1540. ACM, 2016.
[15] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing
properties of neural networks. In International Conference on Learning Representations,
2014.
