Let's start here:
Our brains reveal our choices before we're even aware of them, study finds
Mar 2019, phys.org
Our thoughts can be predicted 11 seconds in advance by looking at patterns in brain activity.
"We believe that when we are faced with the choice between two or more options of what to think about, non-conscious traces of the thoughts are there already, a bit like unconscious hallucinations," Professor Pearson says.
"As the decision of what to think about is made, executive areas of the brain choose the thought-trace which is stronger. In other words, if any pre-existing brain activity matches one of your choices, then your brain will be more likely to pick that option as it gets boosted by the pre-existing brain activity."
-Professor Joel Pearson, Director of the Future Minds Lab at UNSW School of Psychology
*Note: the researchers caution against assuming that all choices are by nature predetermined by pre-existing brain activity.
Roger Koenig-Robert et al. Decoding the contents and strength of imagery before volitional engagement, Scientific Reports (2019). DOI: 10.1038/s41598-019-39813-y
Aside from the idea of biases in general, this all reminds me of another article on digital telepathy:
A bunch of people each have a Tetris game controller with only one button, so that one person's button rotates, another's slides sideways, etc. Together they have to make decisions by consensus, verbally, while they collaboratively control the Tetris block.
The division of labor is ruthless (only one button) but the communication is rich (human speech). The result is a complex procedure that has been stripped-down to operate in a digitally-mediated environment.
Then there's the straight telepathy style collaboration where they bypass the verbal communication and go straight to brain waves:
How you and your friends can play a video game together using only your minds
July 2019, University of Washington News
A University of Washington team is doing telepathic collective problem-solving. It's called BrainNet. Three people play a Tetris-like game by talking to each other with their brain waves and a wireless signal.
As in Tetris, the game shows a block at the top of the screen and a line that needs to be completed at the bottom. Two people, the Senders, can see both the block and the line but can’t control the game. The third person, the Receiver, can see only the block but can tell the game whether to rotate the block to successfully complete the line.
Each Sender decides whether the block needs to be rotated and then passes that information from their brain, through the internet, and to the brain of the Receiver. Then the Receiver processes that information and sends a command — to rotate or not rotate the block — to the game directly from their brain, hopefully completing and clearing the line.
The screen also showed the word “Yes” on one side and the word “No” on the other side. Beneath the “Yes” option, an LED flashed 17 times per second. Beneath the “No” option, an LED flashed 15 times per second. Once the Sender makes a decision about whether to rotate the block, they send ‘Yes’ or ‘No’ to the Receiver’s brain by concentrating on the corresponding light [which then sends a frequency-specific signal downstream].
-University of Washington
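The flicker-frequency trick described above is a standard SSVEP (steady-state visually evoked potential) setup: staring at a light flashing at 17 Hz entrains a matching 17 Hz rhythm in visual cortex, which EEG can pick up. A minimal sketch of how such a decoder might tell the two options apart — toy simulated EEG, with illustrative sampling rate and thresholds, not the BrainNet team's actual pipeline:

```python
import numpy as np

def classify_ssvep(eeg, fs=256, f_yes=17.0, f_no=15.0):
    """Toy SSVEP decoder: compare spectral power at the two flicker
    frequencies and pick whichever is stronger. Illustrative only."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    p_yes = power[np.argmin(np.abs(freqs - f_yes))]
    p_no = power[np.argmin(np.abs(freqs - f_no))]
    return "Yes" if p_yes > p_no else "No"

# Simulate 2 seconds of EEG while the Sender stares at the 17 Hz LED:
fs = 256
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 17.0 * t) + 0.5 * rng.standard_normal(len(t))
print(classify_ssvep(eeg, fs))  # prints "Yes"
```

Real systems do more (artifact rejection, harmonics, canonical correlation analysis), but the core idea — the attended flicker frequency dominates the EEG spectrum — is just this.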
If we take this splintered form of decision-making and combine it with the fact that we don't seem to make decisions the way we think we do (the decision is already made seconds before we realize it), then we should expect that, as we get better at collaborating and at complexifying our distributed cognition networks, robots, i.e., artificially intelligent entities, will help us, and become part of us.
Scale this up and imagine 700 people collectively coordinating a robot's movements, but not just one robot, hundreds and thousands. All semibots, no more line between us.
Notes:
We have come a long way since Emotiv's EPOC headset almost a decade ago; just imagine 2030.
Try Not to Think
Network Address, 2017
All Your Brain Are Belong To Us
Network Address, 2012
Playing Tetris by committee
May 2019, BBC
^Developed by Patrick Lemieux of UC Davis, California, the Octopad's single-button controllers mean each player can trigger only one kind of movement in the game, so it forces co-operation and conversation between players.
Post Script:
Shared control allows a robot to use two hands working together to complete tasks
May 2019, phys.org
A team of researchers from the University of Wisconsin and the Naval Research Laboratory has designed and built a robotic system that allows for bimanual robot manipulation through shared control.... a technique that enabled a robot to carry out bimanual tasks by sharing control with a human being. ... The robot did not progress to the point of performing the task on its own—instead, it learned to serve as a more fully capable augmented assistant.
Pedestrians at crosswalks found to follow the Levy walk process
Apr 2019, phys.org
As people cross an intersection, they interact with each other in predictable ways.
"Rather than people continually meeting face to face, walkers would simply follow a person moving in the same direction, preventing the constant need to shift their path. ... Doing so increased efficiency both for the individuals and for the crowd as a whole."
They also found that these streams followed a Lévy process.
...
The Lévy walk is a mathematical description, which means it's predictable. It says that as you walk, or as your eyes dart across a screen, or as you perform any of a whole class of repetitive actions, you will move in many short strides interspersed with occasional long ones. The ratio of short strides to long ones, and the lengths of each, follow a power-law distribution; that distribution is the Lévy process. Because our walking follows a Lévy process, we can predict how many steps you will take as you cross a given intersection.
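A minimal sketch of what "power-law distributed stride lengths" means in practice, using inverse-CDF sampling from a Pareto distribution — the exponent `alpha` and minimum stride `l_min` here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def levy_walk_steps(n, alpha=1.5, l_min=1.0, seed=0):
    """Draw n stride lengths from a power law, P(l) ~ l^-(1+alpha).
    The heavy tail yields mostly short strides with a few very long
    ones -- the signature of a Lévy walk. Parameters are illustrative."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    return l_min * u ** (-1.0 / alpha)  # inverse-CDF (inverse transform) sampling

steps = levy_walk_steps(10_000)
print(np.median(steps))  # most strides are short...
print(steps.max())       # ...but a few are very long
```

The asymmetry between the median and the maximum is the point: unlike a Gaussian walk, a Lévy walk's rare long strides dominate how far you travel, which is what makes deviations from the expected pattern stand out.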
And if you happen to be walking funny because you have a shotgun in your trousers, that can now be recognized by a persistent surveillance system, to either alert in advance of atypical behavior, or to aid in identifying individuals of interest in footage of an event after it has taken place.