Saturday, September 14, 2019

On the Brains of Machines


This picture is kind of like an infrared camera but for algorithms.

It's a heat map for the eyeballs of a computer; what is it looking at, what are its clues?

In this case, it's looking at the water, not at the ship, in order to identify the image as a ship. (We're also assigning agency to this thing, in case anyone's keeping track.)

Neural nets are a big deal these days, but they come with a new problem. We don't know what they're doing, because the thing that makes them so special is that they figure out their own algorithm. (Agency again.) Computer programmers are not writing the programs; the networks write the programs using trial and error. Machine Learning is another name for this idea of iterative development.

There are a lot of people who would like to know what's going on in there, mostly to see how these networks arrive at their answers, and to make sure they aren't cheating to get them. Some learn bad habits, like detecting "ships" by the water around them (which means they're good at detecting water, not ships), or by skimming metadata (which means they're good at classifying metadata, not pictures of stuff). These heat maps, and more importantly the forensics-like algorithms that inform them, are very helpful. They let us see inside the brains of the machine.
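The paper cited below builds its maps with layer-wise relevance propagation; as a minimal sketch of the underlying idea, here's a toy linear "ship detector" (all values hypothetical) where each pixel's contribution to the score is just pixel times weight, and the resulting heat map lights up on the water, not the ship:

```python
import numpy as np

# A toy 4x4 "image": the top rows are the ship, the bottom rows are water.
image = np.array([
    [0.1, 0.9, 0.9, 0.1],   # ship
    [0.1, 0.8, 0.8, 0.1],   # ship
    [0.7, 0.7, 0.7, 0.7],   # water
    [0.6, 0.7, 0.6, 0.7],   # water
])

# Hypothetical weights of a linear "ship detector" that has quietly
# learned to key on the water instead of the ship.
weights = np.array([
    [0.0, 0.1, 0.1, 0.0],
    [0.0, 0.1, 0.1, 0.0],
    [0.9, 0.9, 0.9, 0.9],
    [0.9, 0.9, 0.9, 0.9],
])

heatmap = image * weights            # each pixel's contribution to the score
score = heatmap.sum()                # the classifier's "ship" score

water_share = heatmap[2:].sum() / heatmap.sum()
print(heatmap.round(2))
print(f"water pixels supply {water_share:.0%} of the evidence")
```

The heat map is the forensic evidence: almost all of the score comes from the bottom two rows, which is exactly the Clever Hans behavior the paper is hunting for.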

***
Speaking of disembodied brains, here's the artificial synapse. It uses a new type of hardware memory that works more like a brain does: the synapses sit in an array, where they can all do their computing business simultaneously. Neuromorphic computing.
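A rough sketch of why the array matters, under the usual crossbar assumption that weights are stored as conductances (all values hypothetical): every crossing point conducts at once, so the currents summing along the row wires are a matrix-vector product computed in one parallel step, inside the memory itself.

```python
import numpy as np

# Synaptic weights stored as conductances at each crossing point
# of a 2-row x 3-column crossbar (values hypothetical).
G = np.array([
    [0.2, 0.5, 0.1],
    [0.7, 0.1, 0.4],
])

v = np.array([1.0, 0.5, 0.2])   # input voltages driven onto the columns

# Ohm's law at every crossing, Kirchhoff's law along every row:
# the currents collected on the row wires ARE the matrix-vector
# product, with no data shuttled back and forth to a processor.
i_out = G @ v
print(i_out)   # one current per output "neuron"
```

That's the whole pitch of in-memory computing: the multiply-accumulate that dominates neural-net workloads happens in the physics of the array, not in a CPU.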

And if you want to grow those artificial synapses in a 3-D tissue culture (brains in a dish), call these guys.

Cerebral organoids -- they're more for studying how the brain works than for making artificial brains. At least they're not using human brains, right?

Wrong; there are ethical concerns that these organoids might develop consciousness, or have already developed consciousness. 

Notes:

What is it like being a brain in a computer?
Clarifying how artificial intelligence systems make choices
Mar 2019, phys.org

Sebastian Lapuschkin et al, Unmasking Clever Hans predictors and assessing what machines really learn, Nature Communications (2019). DOI: 10.1038/s41467-019-08987-4

Fast, efficient and durable artificial synapse developed
Apr 2019, phys.org

Elliot J. Fuller et al. Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing, Science (2019). DOI: 10.1126/science.aaw5581

Researchers grow active mini-brain-networks
Jun 2019, phys.org

Stem Cell Reports, Sakaguchi et al.: "Self-organized synchronous calcium transients in a cultured human neural network derived from cerebral organoids"
https://www.cell.com/stem-cell-reports/fulltext/S2213-6711(19)30197-3
DOI: 10.1016/j.stemcr.2019.05.029

On Free Will, Decision Making and Sovereign Awareness


Let's start here:
Our brains reveal our choices before we're even aware of them, study finds
Mar 2019, phys.org

Our thoughts can be predicted 11 seconds in advance by looking at patterns in brain activity.

"We believe that when we are faced with the choice between two or more options of what to think about, non-conscious traces of the thoughts are there already, a bit like unconscious hallucinations," Professor Pearson says.

"As the decision of what to think about is made, executive areas of the brain choose the thought-trace which is stronger. In, other words, if any pre-existing brain activity matches one of your choices, then your brain will be more likely to pick that option as it gets boosted by the pre-existing brain activity."
-Professor Joel Pearson, Director of the Future Minds Lab at UNSW School of Psychology

*Note: the researchers caution against assuming that all choices are by nature predetermined by pre-existing brain activity.
Roger Koenig-Robert et al. Decoding the contents and strength of imagery before volitional engagement, Scientific Reports (2019). DOI: 10.1038/s41598-019-39813-y

Aside from the idea of biases in general, this all reminds me of another article on digital telepathy:

A bunch of people each have a Tetris game controller with only one button, so that one person's button rotates the block, another's slides it sideways, etc. Together they have to make decisions by consensus, verbally, while they collaboratively control the Tetris block.

The division of labor is ruthless (only one button) but the communication is rich (human speech). The result is a complex procedure that has been stripped-down to operate in a digitally-mediated environment.

Then there's the straight telepathy style collaboration where they bypass the verbal communication and go straight to brain waves:
How you and your friends can play a video game together using only your minds
July 2019, University of Washington News

A University of Washington team is doing telepathic collective problem-solving. It's called BrainNet. Three people play Tetris, talking to each other with their brain waves over a wireless signal.
As in Tetris, the game shows a block at the top of the screen and a line that needs to be completed at the bottom. Two people, the Senders, can see both the block and the line but can’t control the game. The third person, the Receiver, can see only the block but can tell the game whether to rotate the block to successfully complete the line.

Each Sender decides whether the block needs to be rotated and then passes that information from their brain, through the internet and to the brain of the Receiver. Then the Receiver processes that information and sends a command — to rotate or not rotate the block — to the game directly from their brain, hopefully completing and clearing the line.  
The screen also showed the word “Yes” on one side and the word “No” on the other side. Beneath the “Yes” option, an LED flashed 17 times per second. Beneath the “No” option, an LED flashed 15 times a second. Once the Sender makes a decision about whether to rotate the block, they send ‘Yes’ or ‘No’ to the Receiver’s brain by concentrating on the corresponding light [which then sends frequency-specific signal downstream].
-University of Washington
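That flashing-light trick is a steady-state visually evoked potential: staring at a 17 Hz light puts a 17 Hz bump in your EEG. A minimal sketch of how the answer could be read off a recording (simulated signal, made-up noise level; not the actual BrainNet pipeline):

```python
import numpy as np

fs = 256                        # sample rate in Hz, typical for consumer EEG
t = np.arange(0, 2, 1 / fs)     # two seconds of recording

rng = np.random.default_rng(0)
# Simulated EEG from a Sender staring at the 17 Hz "Yes" light:
# a 17 Hz oscillation buried in noise (amplitude and noise level made up).
eeg = np.sin(2 * np.pi * 17 * t) + 0.8 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f):
    """Spectrum magnitude at the bin nearest frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

# Whichever flicker frequency dominates the spectrum is the answer.
decision = "Yes" if power_at(17) > power_at(15) else "No"
print(decision)
```

The two frequencies only need to be far enough apart to land in separate FFT bins, which is why 15 and 17 Hz work as a one-bit channel out of a skull.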
Take this splintered form of decision-making, and combine it with the fact that we don't seem to make decisions the way we think we do (the decision is already made seconds before we realize it). Then, as we get better at collaborating and our distributed cognition networks grow more complex, we should expect robots, i.e., artificially intelligent entities, to be helping us, and becoming part of us.

Scale this up and imagine 700 people collectively coordinating a robot's movements, but not just one robot, hundreds and thousands. All semibots, no more line between us.



Notes:
We have come a long way since Emotiv's EPOC headset almost a decade ago; just imagine 2030.

Try Not to Think
Network Address, 2017

All Your Brain Are Belong To Us
Network Address, 2012

Playing Tetris by committee
May 2019, BBC

Developed by Patrick Lemieux of UC Davis, California, the Octopad's single-button controllers mean each player can only trigger one kind of movement in the game, which forces co-operation and conversation between players.

Post Script:
Shared control allows a robot to use two hands working together to complete tasks
May 2019, phys.org

A team of researchers from the University of Wisconsin and the Naval Research Laboratory has designed and built a robotic system that allows for bimanual robot manipulation through shared control ... a technique that enabled a robot to carry out bimanual tasks by sharing control with a human being. ... The robot did not progress to the point of performing the task on its own—instead, it learned to serve as a more fully capable augmented assistant.

Pedestrians at crosswalks found to follow the Lévy walk process
Apr 2019, phys.org

As people cross an intersection, they interact with each other in predictable ways. 

"Rather than people continually meeting face to face, walkers would simply follow a person moving in the same direction, preventing the constant need to shift their path. ... Doing so increased efficiency both for the individuals and for the crowd as a whole."

They also found that these streams followed a Lévy process.

...
The Lévy walk process is a mathematical description, which means it's predictable. It says that as you walk, or as your eyes dart across a screen, or as you do a whole bunch of repetitive actions, you will move in many short steps interspersed with occasional long ones. The ratio of short to long, and the distances of each, follow a power-law distribution; that's the Lévy process. Because our walking follows a Lévy process, we can predict how many steps you will take as you cross a given intersection.
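A sketch of what that power law looks like in practice, drawing step lengths from a Pareto tail (the exponent and minimum step here are illustrative choices, not the paper's fitted values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw step lengths from a power-law tail: p(l) ~ l**-(alpha + 1).
# alpha between 1 and 2 is the classic Levy-walk regime; the values
# below are illustrative, not fitted to real pedestrian data.
alpha, l_min = 1.5, 1.0
u = rng.random(10_000)
steps = l_min * (1 - u) ** (-1 / alpha)   # inverse-CDF sampling

short = (steps < 2 * l_min).mean()   # fraction of "short" strides
longest = steps.max()

print(f"{short:.0%} of steps are under twice the minimum,")
print(f"yet the longest step is {longest:.0f}x the minimum")
```

That heavy tail is the signature: mostly small moves, punctuated by rare huge ones, which is what makes the crowd-level flow both efficient and statistically predictable.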

And if you happen to be walking funny because you have a shotgun in your trousers, that can now be recognized by a persistent surveillance system, to either alert in advance of atypical behavior, or to aid in identifying individuals of interest in footage of an event after it has taken place.