Tuesday, April 22, 2025

Neurodegenerative Mimesis


For a fine specimen of Deep-Dream-era hallucinations, look no further than the image above. It was generated with Stable Diffusion 1.5 and retrieved from the Lexica library; the prompt was something about "investment research".

It's a great example of the kinds of things that can go wrong, and the subtle ways a sophisticated technology can betray its true self (self). 

We see a competent attempt to portray a formal document: slightly yellow, maybe manila, looks more official that way, and it's got tiny black characters printed on it, arranged in tables and grids and columns. Can't tell if they're numbers or letters, or even which alphabet. But it does look official. Something is highlighted in red. A pair of thin-rimmed glasses rests on the page, next to a red fountain pen.

But look any closer and things get weird as hell. And if you were looking for an image imitating "investment research", you may want to stop here, because you probably have what you want: the passing glance will see all this exactly as it was intended (intended).

The obsessed do not stop there, however. A few more minutes of inspection takes us deep into the world of megadata hallucinations. The yellowed paper appears at first to have slight wrinkles, like maybe the visual artifact of a billion pictures of buried treasure maps. Yet the paper has no wrinkles; they're intimated by minor changes in shadows across the surface of the page, and in the wavy orientation of the words. Look carefully and you can see how the shadows and the word-waviness don't match up.

The eyeglasses, thoroughly convincing for about three seconds, are completely deformed, like they've been in a horrific car accident. In fact, they sit so squarely between 'thoroughly convincing' and 'completely deformed' that I think I'm the one hallucinating.

The fountain pen is positioned so it shares a contour with the eyeglasses, and now, as we inspect a bit further, at the edge where the two meet, we can't tell which is which: am I seeing a clip attached to the edge of the pen, or is that the frame of the glasses? (It's both, actually.) The shadow cast by the pen has too much red in it and not enough black, and we think maybe it's because the pen is slightly translucent, but then it can't be, because the highlights on top are too strong for what would then have to be a transparent pen. And, is that what I think it is? Yes, there are highlights on the shadow. Hold on, now I'm not even sure this is a pen.

The part that really gets me is the thing that's been highlighted. I mean, in what world do we first highlight something and then fastidiously outline, with a fine ink pen, the shape of the highlighter mark itself, as if that weren't a distraction from whatever is being highlighted? And the way it's outlined: a jittery line, half Matisse, half rheumatoid arthritis. Definitely getting cross-contaminated by the buried treasure maps. Where the two intersect is in the grids in and around the red highlighted area, as they shift from Excel spreadsheet to organic, three-dimensional cross-contouring grids, and back again.

Each one of these pictures contains within it a training set of hundreds of millions of images, all bubbling right underneath the surface, and if you look just a few seconds longer than you're supposed to (supposed to), you can see the dreams of an entire civilization, all at once.


Postscript:
The National Institute for Occupational Safety and Health (NIOSH) says that you are useless after 14 hours of work - you become so prone to mistakes that you're better off not working at all.

Robots don't get tired, so they can work forever. But something happens when the robots we're talking about are doing not physical work but a kind of cognitive labor. There are plenty of examples of "model collapse", where a model is fed a steady diet of another model's output instead of human-generated output, which sounds like cannibalism, and we all know that cannibalism is bad. Sometimes they even feed it "synthetic data", which sounds a lot like artificial meat, and which sounds just as bad as cannibalism, except maybe it's the inverse of that.

It didn't take long for us to figure out that this broken data diet has negative effects on model output (see the 2024 work from Oxford's OATML group, among others); it leads to a model that acts like it has dementia. Now, I don't know about you, but I've always wanted to know what it's like to have dementia without actually having dementia, and here it is.
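
If you want to watch the dementia set in, here's a minimal sketch in Python, my own toy illustration rather than the OATML experiment, with all the names made up: fit a Gaussian to some data, sample from the fit, refit on the samples, repeat. Every generation trains only on the generation before it.

    # Toy model collapse: each "model" is just a fitted Gaussian,
    # and each generation sees only the previous generation's samples.
    import numpy as np

    rng = np.random.default_rng(seed=42)
    human_data = rng.normal(loc=0.0, scale=1.0, size=1000)  # the real stuff

    mu, sigma = human_data.mean(), human_data.std()
    for generation in range(1, 11):
        # The broken data diet: no human data after generation zero.
        synthetic = rng.normal(mu, sigma, size=1000)
        mu, sigma = synthetic.mean(), synthetic.std()
        print(f"gen {generation:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")

Run it a few times and watch sigma drift: the tails of the original distribution die off first, generation by generation, then everything else. Cannibalism, arithmetic edition.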
