Saturday, September 14, 2019

On the Brains of Machines

This picture is kind of like an infrared camera but for algorithms.

It's a heat map for the eyeballs of a computer: what is it looking at, and what are its clues?

In this case, it's looking at the water, not at the ship, in order to identify the image as a ship. (We're also assigning agency to this thing, in case anyone's keeping track.)

Neural nets are a big deal these days, but they come with a new problem. We don't know what they're doing, because the thing that makes them so special is that they figure out their own algorithm. (Agency again.) Computer programmers are not writing the programs; the networks write the programs using trial and error. Machine Learning is another name for this idea of iterative development.

There are a lot of people who would like to know what's going on in there, mostly to see how these things get their answers, and to make sure the algorithms don't cheat along the way. Some networks learn bad habits, like detecting "ships" in pictures with water (which means they're good at detecting water, not ships), or skimming metadata (which means they're good at classifying metadata, not pictures of stuff). These heat maps, and more importantly the forensics-like algorithms that inform them, are very helpful. They let us see inside the brains of the machines.
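The spirit of those relevance heat maps can be sketched in toy form. For a single linear scoring layer (a stand-in for the real networks in the Lapuschkin paper, which use layer-wise relevance propagation), the score decomposes exactly into per-pixel contributions; the function and variable names here are my own, hypothetical:

```python
def relevance_heatmap(pixels, weights):
    """Toy relevance attribution for one linear scoring layer.

    The classifier's score is sum(x_i * w_i); each pixel's relevance
    is its own term x_i * w_i, so the heat map sums back to the score.
    """
    return [x * w for x, w in zip(pixels, weights)]

# A 4-pixel "image": two bright water pixels, two dim ship pixels.
pixels = [0.9, 0.8, 0.1, 0.2]
# A cheating classifier that has learned to weight the water pixels.
weights = [1.0, 1.0, 0.1, 0.1]

heatmap = relevance_heatmap(pixels, weights)
score = sum(heatmap)  # equals the dot product of pixels and weights
```

A real heat map is this relevance reshaped back into image coordinates; when the hot pixels sit on the water rather than the ship, you've caught a Clever Hans.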

Speaking of disembodied brains, here's the artificial synapse. It uses a new type of hardware memory that works more like a brain does: the elements sit in an array, where they can do their computing business simultaneously. Neuromorphic computing.

And if you want to grow those artificial synapses in a 3-D tissue culture (brains in a dish), call these guys.

Cerebral organoids -- they're more for studying how the brain works than they are about making artificial brains. At least they're not using human brains, right?

Wrong; there are ethical concerns that these organoids might develop consciousness, or have already developed consciousness. 


What is it like being a brain in a computer?
Clarifying how artificial intelligence systems make choices
Mar 2019,

Sebastian Lapuschkin et al, Unmasking Clever Hans predictors and assessing what machines really learn, Nature Communications (2019). DOI: 10.1038/s41467-019-08987-4

Fast, efficient and durable artificial synapse developed
Apr 2019,

Elliot J. Fuller et al. Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing, Science (2019). DOI: 10.1126/science.aaw5581

Researchers grow active mini-brain-networks
Jun 2019,

Stem Cell Reports, Sakaguchi et al.: "Self-organized synchronous calcium transients in a cultured human neural network derived from cerebral organoids"
DOI: 10.1016/j.stemcr.2019.05.029

On Free Will, Decision Making and Sovereign Awareness

Let's start here:
Our brains reveal our choices before we're even aware of them, study finds
Mar 2019,

Our thoughts can be predicted 11 seconds in advance by looking at patterns in brain activity.

"We believe that when we are faced with the choice between two or more options of what to think about, non-conscious traces of the thoughts are there already, a bit like unconscious hallucinations," Professor Pearson says.

"As the decision of what to think about is made, executive areas of the brain choose the thought-trace which is stronger. In other words, if any pre-existing brain activity matches one of your choices, then your brain will be more likely to pick that option as it gets boosted by the pre-existing brain activity."
-Professor Joel Pearson, Director of the Future Minds Lab at UNSW School of Psychology

*Note: the researchers caution against assuming that all choices are by nature predetermined by pre-existing brain activity.
Roger Koenig-Robert et al. Decoding the contents and strength of imagery before volitional engagement, Scientific Reports (2019). DOI: 10.1038/s41598-019-39813-y

Aside from the idea of biases in general, this all reminds me of another article on digital telepathy:

A bunch of people each have a tetris game controller with only one button, so that one person's button rotates, the other slides sideways, etc. Together they have to make decisions by consensus, verbally, while they collaboratively control the tetris-block.

The division of labor is ruthless (only one button) but the communication is rich (human speech). The result is a complex procedure that has been stripped-down to operate in a digitally-mediated environment.

Then there's the straight telepathy style collaboration where they bypass the verbal communication and go straight to brain waves:
How you and your friends can play a video game together using only your minds
July 2019, University of Washington News

A University of Washington team is doing telepathic collective problem-solving. It's called BrainNet. Three people play a Tetris-like game by talking to each other with their brain waves and a wireless signal.
As in Tetris, the game shows a block at the top of the screen and a line that needs to be completed at the bottom. Two people, the Senders, can see both the block and the line but can’t control the game. The third person, the Receiver, can see only the block but can tell the game whether to rotate the block to successfully complete the line.

Each Sender decides whether the block needs to be rotated and then passes that information from their brain, through the internet, to the brain of the Receiver. Then the Receiver processes that information and sends a command — to rotate or not rotate the block — to the game directly from their brain, hopefully completing and clearing the line.
The screen also showed the word “Yes” on one side and the word “No” on the other side. Beneath the “Yes” option, an LED flashed 17 times per second. Beneath the “No” option, an LED flashed 15 times a second. Once the Sender makes a decision about whether to rotate the block, they send ‘Yes’ or ‘No’ to the Receiver’s brain by concentrating on the corresponding light [which then sends frequency-specific signal downstream].
-University of Washington
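The frequency-tagging trick in that setup (steady-state visual evoked potentials) can be sketched as a simple spectral comparison. This is a hypothetical toy decoder, not the BrainNet code; it just asks which of the two flicker frequencies dominates the recorded signal:

```python
import numpy as np

def ssvep_decision(signal, fs=256.0, f_yes=17.0, f_no=15.0):
    """Decide 'yes' or 'no' from an EEG trace by comparing spectral
    power at the two flicker tag frequencies: 17 Hz (Yes), 15 Hz (No)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    p_yes = spectrum[np.argmin(np.abs(freqs - f_yes))]
    p_no = spectrum[np.argmin(np.abs(freqs - f_no))]
    return "yes" if p_yes > p_no else "no"

# One second of fake EEG: a 17 Hz oscillation buried in noise.
rng = np.random.default_rng(0)
t = np.arange(256) / 256.0
eeg = np.sin(2 * np.pi * 17.0 * t) + 0.5 * rng.standard_normal(256)
```

Staring at the flashing LED entrains the visual cortex at that LED's frequency, which is why a plain Fourier transform is enough to read the answer out.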
If we take this splintered form of decision-making, and combine it with the fact that we don't seem to be making decisions the way we think we are (the decision is already made seconds before we realize it), then something follows: as we get better at collaborating and complexifying our distributed cognition network, we should expect robots, i.e., artificially intelligent entities, to be helping us, and becoming part of us.

Scale this up and imagine 700 people collectively coordinating a robot's movements; not just one robot, but hundreds and thousands. All semibots, no more line between us.

We have come a long way since Emotiv's EPOC headset almost a decade ago; just imagine 2030.

Try Not to Think
Network Address, 2017

All Your Brain Are Belong To Us
Network Address, 2012

Post Script:
Shared control allows a robot to use two hands working together to complete tasks
May 2019,
A team of researchers from the University of Wisconsin and the Naval Research Laboratory has designed and built a robotic system that allows for bimanual robot manipulation through shared control.... a technique that enabled a robot to carry out bimanual tasks by sharing control with a human being. ... The robot did not progress to the point of performing the task on its own—instead, it learned to serve as a more fully capable augmented assistant.
Pedestrians at crosswalks found to follow the Levy walk process
Apr 2019,

As people cross an intersection, they interact with each other in predictable ways.

"Rather than people continually meeting face to face, walkers would simply follow a person moving in the same direction, preventing the constant need to shift their path. ... Doing so increased efficiency both for the individuals and for the crowd as a whole."

They also found that these streams followed a Lévy process.

The Lévy walk process is a mathematical description, which means it's predictable. It says that as you walk, or as your eyes dart across a screen, or as you repeat any of a whole bunch of actions, you will move in many short strides interspersed with occasional long ones. The ratio of short to long, and the distances of each, are determined by a power-law distribution; that is the Lévy process. That our walking follows a Lévy process means we can predict how many steps you will take as you cross a given intersection.
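A Lévy walk can be sketched by drawing step lengths from a power-law (Pareto) tail via inverse-CDF sampling: mostly short steps, occasionally a very long one. The exponent here is illustrative, not taken from the paper:

```python
import random

def levy_step(alpha=1.5, x_min=1.0, rand=random.random):
    """Sample one step length from a Pareto (power-law) distribution,
    P(X > x) = (x_min / x) ** alpha, by inverting the CDF."""
    u = rand()
    return x_min * (1.0 - u) ** (-1.0 / alpha)

random.seed(1)
steps = [levy_step() for _ in range(1000)]
# Most steps are short, but the heavy tail yields a few long excursions.
```

Because the tail falls off as a power law rather than exponentially, rare long strides dominate the total distance covered, which is what makes the process recognizable in crowd footage.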

And if you happen to be walking funny because you have a shotgun in your trousers, that can now be recognized by a persistent surveillance system, to either alert in advance of atypical behavior, or to aid in identifying individuals of interest in footage of an event after it has taken place. 

Thursday, August 8, 2019

In The Fractal Closet

Behold the Mandelbrot Set, a world-renowned image and a powerful symbol of infinity. The mathematical concept it illustrates is one of the most intuitive --it describes self-similarity-- and yet it remained unknown, even unbeckoned and unsought-after, until the advent of the modern computer. This dormant formula only came to life after iterations so numerous as to be considered infinite were the human hand the one computing. The computer showed us what a simple formula could do, if you scale it up. It shows us a behavior, not of things, but of space-time itself.
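The "simple formula" really is this small. A minimal escape-time sketch: iterate z → z² + c and count how long each point of the complex plane takes to escape; the points that never escape make up the Mandelbrot set:

```python
def mandelbrot_escape(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0; return the number of iterations
    before |z| exceeds 2 (escape), or max_iter if it stays bounded."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# Points inside the set never escape; points outside escape quickly.
inside = mandelbrot_escape(0j)       # stays bounded forever
outside = mandelbrot_escape(2 + 0j)  # escapes almost immediately
```

Color each pixel by its escape count and the familiar image appears; the human hand could never iterate enough times to see it, which is why the formula slept until computers arrived.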

I wake up to the sound of rain, splattering. Then I wake up some more, and realize it's not rain.

It's coming from the ceiling. The plumbing. One hundred years of building and inhabiting and changing and changing has left me with an unfortunate design that is now leaking water from a ruptured pipe above the kitchen ceiling, and into my pantry closet.

Now I am fully awake, and fully out of bed, and pulling out all my things from the closet. I am transporting a 3x3' closet's worth of possessions to a 10x10' back room, and hastily.

The plumber has come and gone, and so has the handyman who repaired the closet. It's time to replace my things, to fill the closet again. But all I can do is stare at the sprawling piles of stuff that were once compacted, compressed into a 3x3' closet. How did all this stuff possibly fit in that little closet? And it occurs to me -- this is fractals.

We are all familiar with the general idea of fractals, the Mandelbrot set, the self-similarity, the LSD. But that intuitively recognizable feature is an outcome of an underlying objective. The reason a fractal looks the way it does is because of its space-filling behavior, which itself is a function of growth limited by space. In order to keep jamming more and more stuff into that space, you have to follow the fractal formula.

A better example of fractals is not a tie-dye t-shirt but the coast of England. If you were to measure the coastline of England with a one-mile long measuring stick, it would be a pretty vague approximation of the coastline, but with a defined length.

Then if you were to measure with a one-foot stick (which would be ironic), you would get a much better approximation, but also a much larger coastline, because now that your shorter imperial stick can reach into all the nooks and crannies, it makes the total length that much longer. In fact, the smaller your measuring stick, the longer the coastline.

By this reasoning, the length of the coastline is infinite. In other words, it is not a 1-D line at all. Yet neither is it 2-D. It is 1.456-D, or maybe 1.879-D; it is a fraction of a dimension.
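Richardson's empirical relation captures this: the measured length grows as the ruler shrinks, L(ε) ≈ F·ε^(1−D), where D is the fractal dimension (roughly 1.25 for the west coast of Britain). A quick sketch, with the prefactor F chosen arbitrarily:

```python
def coastline_length(ruler, D=1.25, F=1000.0):
    """Richardson's relation: measured length L = F * ruler**(1 - D).
    With D > 1, a shorter ruler always yields a longer coastline."""
    return F * ruler ** (1.0 - D)

mile_stick = coastline_length(1.0)          # 1-mile stick, in miles
foot_stick = coastline_length(1.0 / 5280)   # 1-foot stick, in miles
# foot_stick > mile_stick: the shorter stick finds the nooks and crannies.
```

As the ruler goes to zero the length diverges, which is the precise sense in which the coastline is "infinite" and its dimension sits strictly between 1 and 2.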

When you fill a closet with things, it doesn't just happen all at once. Sure, you start by "filling" the closet. But over time, as you use the things and remove things, add more things, and rearrange, you are filling the space more and more. But you're not just filling it with things now, you're filling it with intelligence.

The more time goes by, and the more you use things, remove, add and rearrange, you are going to fill all the nooks and crannies of that 3-D space until it is no longer 3-D. It becomes a fraction of a dimension.

Then, when you take everything back out, you collapse the extra fraction that you helped to create. When the things come back into normal 3-D space, they seem bigger in aggregate, they seem to have gained size in the process. The closet is now sprawling across an entire room. That difference is fractals. (It's also because the 3-D closet is now spread across a 2-D floor; but that doesn't account for all of its 'enlargement', as the same phenomenon is experienced when filling a box truck on moving day.)

If we could extrapolate this to the 4th dimension, what would we be talking about? Or do I need to be on acid to have that conversation? Maybe an easier question would be -- what does the airtight Tetris block have that the jumbled pile of pieces does not? (Well, it doesn't have air, obviously.) But besides that, it's the entropy. The block is ordered, and the pile disordered. The pile is a random mess, and the block an intelligent artifact.

I hadn't thought of my catch-all pantry closet as an intelligent artifact, yet here it is, an entropy-reversing portal that uses intelligence to loop out of its limited dimensions.

Post Script:
It is still hard to see the pantry closet as having something to do with intelligence. Try taking it out and putting it all back so it fits. Then you'll see how much "intelligence" went into its arrangement. The difference is that in its natural state, the closet possesses an accumulated intelligence. Over time, as you use all the things in the closet, your intelligent behavior leaves its residue on the things in it. It is a storage depot, not of things, but of an arrangement.

Post Post Script:
[I start looking up these deepdream images and I have to post them all.]

[can't find source bc pinterest; thanks obama]

(Virtual Art) by Rein Bijlsma

Deep Dream Burger by Matthias Hauser

Style Transfer, which is not the same as DeepDream, but does use neural nets, i.e., robot brains. Link.

Wednesday, August 7, 2019

For Real Though

aka Full Meta
I know this is old news, but we have to document these things for posterity -- The real President of the United States verifies the authenticity of his twitter account by calling it The Real [President of the United States].

Image source: The Real Seal
Effective March 15, 2012, the management of the REAL® Seal program was transferred from the United Dairy Industry Association to National Milk Producers Federation. The transfer is symbolic of a renewed purpose: where the Seal once helped consumers discriminate real from fake cheese on pizza, its new job is to distinguish animal milk from soy, rice and nut milk. (But nobody wants to say nut milk.)

Saturday, July 27, 2019

Empathic Intelligence and Digital Feelings

Making progress in the transition to cybernetic psycho-oppression, the Endocorporeal Datavore has released an ethical tribune to tame the wave of intelligentities at our doorstep.

In other words:
Google announces AI ethics panel
Mar 2019, BBC News
In a highly-cited thesis entitled Robots Should Be Slaves, Ms Bryson argued against the trend of treating robots like people.
"In humanising them," she wrote, "we not only further dehumanise real people, but also encourage poor human decision making in the allocation of resources and responsibility."
-Joanna Bryson
On the face of it, I think in the complete opposite direction. I think we need to be empathic to robots, because we already see them as people. Or to be more precise, we already personify them. Granted, it's the same as we do with a washing machine or even a car, but with robots it's different. We are making them in our own image, after all. We couldn't help that if we tried.

When that public service robot (DC?) fell into the pond, the articles joked that he offed himself on the first day on the job. Too stressful. How is that a good thing to model to those who look up to us, poking fun at someone who has killed themselves? We know it's a robot, but kids don't know that. They don't know what anything means because we have to tell them first, to show them first. And if we treat things like crap, whether it's the washing machine or a pair of flip-flops, then they will treat things like crap too.

Not to mention, we're only a couple generations away from being robots ourselves. Prosthetic retinas, cochlear implants, pacemakers, exoskeletons. Did anyone not buy into the 'your cell phone is your exocortex' line by Jason Silva? We're already robots to some degree.

It is definitely good to hear what sounds to me like totally wrong and crazy talk, because this is an ethics panel, and you want to hear everything out there. We're still in such early phases of this, we want to shape the conversation to be as big as possible at this point. And I am pretty sure that Bryson's argument is one that needs to be digested at length, and not just from a small bite (she was chosen to be on this panel for a reason), so I look forward to getting more into it.

And while we're on the topic of empathic intelligence and robot feelings:
Britain's 'bullied' chatbots fight back
Mar 2019, BBC News

Service bots (chatbots) get abused. And I feel bad for them as I read this article.

Those who do research on these kinds of things say that humans are never-not going to test the boundaries. Like a child with their parent or a student with their teacher, we will always test the limits of another person. We do it to everyone in every relationship, but it's the asymmetrical ones where it's most evident (where one person has way more power than the other).

In the case of a chatbot, we also just want to test the believability. Sure they may have programmed this thing to help me return a defective dehumidifier, but did they program it to tell me to f*** off when I give it a hard time? How real is this thing?

Plum, a service chatbot who wants us to think xe's very real, is now programmed to respond: "I might be a robot but I have digital feelings. Please don't swear."

Digital. Feelings.

Wednesday, July 24, 2019

Misinformation Networks

Entire networks, vast, deep, and coordinated. Not to imply conspiracy theories; they aren't that coordinated. But they are big, and myriad.

A description of how one of these networks operates:
"The group was able to gain followers by setting up innocent-looking pages and groups. It later renamed them, and started posting politically-motivated content...including topics like immigration, free speech, racism, LGBT issues, far-right politics, issues between India and Pakistan, and religious beliefs including Islam and Christianity."

"We're taking down these pages and accounts based on their behaviour, not the content they posted."
-Facebook via BBC
They call it "co-ordinated inauthentic behaviour". I don't recall hearing this as part of the story of digital surveillance in general, but it sounds like an improvement. (Well, there are articles dating back to at least Nov 2018 where this term is used.) Behavior is data-rich, and makes Content look one-dimensional in contrast. Coordinated behavior is even more data-rich, because we're talking about the behavior of people in the context of other people.

-image source: MIT

Facebook finds UK-based 'fake news' network
Mar 2019, BBC News

Quantum Futures

It's been a while since we saw "Wegman Reading Two Books". This is one of my favorite pictures ever; I thought it made sense here.
I'm reading about quantum computers on the commute home from work. It's not the computers I'm interested in, and it's not the quantum part either. I'm not a computer scientist, or a quantum scientist. But I do like to think about the future, and if anything says "future", it's quantum computers.

There's a strong theme running through these articles though, the kind of thing I look for in articles about technology. In this case, the problem with quantum computing is that it works so differently from a classical computer that we kind of don't even know what to do with it.

When tasked with devising a test to evaluate the performance of a quantum computer, a moment is revealed:
"I quickly ran into trouble trying to figure out how to run these [scripts] on the D-Wave machine. You need a huge shift in the way you think about problems, and I am a very straightforward thinker."
-Chris Lee for Ars Technica, 2019
It's a casual statement in a casual popular science article, but it's the unassuming bits like this that may be revealed as prescient in years to come. Like in 1890, "Why the heck would you want to screw anything else besides a lightbulb into this electrical light bulb socket??"
It's true, stuff like washing machines would be plugged into the 'light socket' via a cord designed specifically for a light socket; the dual-pronged plug that we are used to, in the US at least, didn't come until much later, and required a drastic paradigm shift in order to see electricity as serving anything but electric light.

Back to quantum computers. Things like this, when the technology forces us to reshape not only our thinking and not only the answers, but more importantly the questions that we ask, and the way we ask, and then ultimately the things we want and the things we care about, are what Kuhn was writing about in his The Structure of Scientific Revolutions.

D-Wave 2000Q hands-on - Steep learning curve for quantum computing
Mar 2019, Ars Technica

Post Script:
The Structure of Scientific Revolutions
Thomas Kuhn, 1962

Science does not progress via linear accumulation of new knowledge, but undergoes periodic revolutions, also called "paradigm shifts" in which the nature of scientific inquiry within a particular field is abruptly transformed.

Thomas Kuhn - the man who changed the way the world looked at science
John Naughton, The Guardian, 2012