Monday, February 17, 2020

Deep Creep

Visualizing and Understanding Convolutional Networks -- This goes back to 2013 already, but I recently came across these images, and they are so mesmerizing I had to archive them.

The images pasted here are from a paper about convolutional neural networks. The researchers train a network on tons of images, then ask that network to classify new images it hasn't seen yet. Driving the classification error rate toward zero is the goal, and this one does a good job.
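That train-then-test loop can be sketched in miniature. This is nothing like a convolutional network -- just a toy nearest-centroid classifier on made-up 2D points, to show what "train on some examples, then measure the error rate on examples it has never seen" means:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the real setup: two "classes" of points, a training set
# and a held-out test set the model never sees during training.
train_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
train_b = rng.normal(loc=[3, 3], scale=0.5, size=(50, 2))
test_a  = rng.normal(loc=[0, 0], scale=0.5, size=(20, 2))
test_b  = rng.normal(loc=[3, 3], scale=0.5, size=(20, 2))

# "Training": a nearest-centroid classifier just memorizes each class's mean.
centroid_a = train_a.mean(axis=0)
centroid_b = train_b.mean(axis=0)

def classify(points):
    """Label each point 0 or 1 by whichever centroid is closer."""
    da = np.linalg.norm(points - centroid_a, axis=1)
    db = np.linalg.norm(points - centroid_b, axis=1)
    return (db < da).astype(int)

# Error rate on points the classifier has never seen.
predictions = np.concatenate([classify(test_a), classify(test_b)])
truth = np.concatenate([np.zeros(20, dtype=int), np.ones(20, dtype=int)])
error_rate = float(np.mean(predictions != truth))
print(error_rate)
```

On data this easy the error rate lands near zero; the hard part, which the paper's network handles, is doing the same thing when the "points" are photographs.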

But what makes this report special is that we get to see the system spit back what the different "neurons" see. The network develops layers or clusters that recognize different things: some are good at low-level features like lines and edges, and some are good at high-level things like "bicycles" or "origami." Together they learn how to see.
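A hedged sketch of what one of those low-level feature detectors does. Here the vertical-edge kernel is hand-made (a Sobel filter); in the actual network, the first-layer kernels are learned from data, but they end up playing a similar role:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A 6x6 "image": dark on the left half, bright on the right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Hand-made vertical-edge kernel; a trained first layer learns filters like this.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

response = convolve2d(image, sobel_x)
print(response)  # nonzero only where the dark-to-bright edge sits
```

The response map lights up exactly along the vertical edge and stays zero in the flat regions -- that's all "seeing lines and edges" means at this level.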


^This is the first layer; it sees angles and colors.


^This is the second. This one's picking up more complicated patterns. Notice the similarities, but also the differences. In each 3x3 set, which one is not like the others? The network saw all of those as similar. The network says that those images sit close together in image-space (the total space of all possible images, or at least all the images it was trained on).
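"Close together in image-space" has a concrete reading: the network turns each image into a vector of numbers, and similar images get vectors that point the same way. A toy sketch with made-up 5-dimensional feature vectors (a real network would produce thousands of dimensions per image):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors: 1.0 = pointing the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented feature vectors for three hypothetical images -- pure illustration,
# not output from any actual network.
dog_a = np.array([0.9, 0.1, 0.8, 0.0, 0.2])
dog_b = np.array([0.8, 0.2, 0.9, 0.1, 0.1])
car   = np.array([0.0, 0.9, 0.1, 0.8, 0.0])

print(cosine_similarity(dog_a, dog_b))  # near 1: nearby in feature space
print(cosine_similarity(dog_a, car))    # much smaller: far apart
```

The 3x3 grids in the paper are exactly this idea made visible: nine images whose feature vectors the network scored as near neighbors.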

It's a game to try to figure out the common denominator. But we don't really know the common denominator, and we can't know, because we're just not computers. This is why they call it "deep": in the middle of this network, deep in there, is a layer with information that is just too complex for us. Collapse 33,000 dimensions into 3, and now try to communicate the features of the 33,000 using only those 3 points of information. Can't do it. That's why it's mysterious.
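One way to feel that point is a toy sketch of the collapse. Keeping only 3 of 33,000 coordinates is much cruder than anything a real visualization method does, but it shows the core problem: two wildly different vectors can become indistinguishable once most of the information is thrown away.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 33_000  # the high-dimensional space from the thought experiment above

# Two feature vectors that agree in their first 3 coordinates
# but differ in the remaining 32,997.
a = rng.normal(size=D)
b = a.copy()
b[3:] = rng.normal(size=D - 3)

# A crude "collapse to 3 dimensions": keep only the first 3 coordinates.
a3, b3 = a[:3], b[:3]

print(np.linalg.norm(a - b))    # large: very different vectors
print(np.linalg.norm(a3 - b3))  # 0.0: identical after the collapse
```

Whatever made `a` and `b` different lived entirely in the dimensions that got dropped -- which is roughly the position we're in when we squint at a middle layer.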


^Now the third layer. Ok, so I see the people, I see the barcode/text motif, the honeycomb/diamond clusters, even the ladybug and the tomato -- it's a stretch, but I get it. But some of these are just nuts. White lower-right corners? It sees white lower-right corners?


^Fourth layer. No idea what this thing is talking about.


^Fifth layer. Here we go: people, dogs, flowers, now it's all making sense. And that was the point of creating this network -- you give it any picture of a flower, no matter how weird the picture, even one that barely looks like a flower, and this network will recognize it and classify it as a flower. (Mostly; it's not perfect.)

But there comes a point in the middle where we have no idea what this thing is thinking about. And we never will. Just like other people, and how we can never really know what someone else is thinking. Which is to say that computers are now a lot more like other people than they ever used to be. They have become mysterious.

Notes:
Zeiler, M. D. and Fergus, R., "Visualizing and Understanding Convolutional Networks," 2013 [ZFNet].
https://arxiv.org/abs/1311.2901
