Thursday, September 23, 2021

Deep Kuhn


Research papers that omit 'mice' from titles receive misleading media coverage
Jun 2021, phys.org

When authors of scientific papers omit the basic fact that a study was conducted in mice (and not in humans) from the article title, journalists reporting on the paper tend to do the same.

via Public Library of Science: Triunfol M, Gouveia FC (2021) What's not in the news headlines or titles of Alzheimer disease articles? #InMice. PLoS Biol 19(6): e3001260. doi.org/10.1371/journal.pbio.3001260

Thomas Kuhn wrote The Structure of Scientific Revolutions (1962), about the progress of science, and how it changes by abrupt paradigm shifts and not as a series of incremental changes. Way before that, Ludwik Fleck wrote about the life cycle of ideas and how they are transmitted among scientists and other human brains. He coined the term "thought collective" and wrote Genesis and Development of a Scientific Fact in 1935, although it wasn't translated into English until 1979, after Kuhn made his meteoric impact. I'm not exactly sure what any of this has to do with the way scientists title their papers, but I think it does. Language, like Science, is a lifeform, and behaves as such in the artificial arena of the human mind.

Image credit: (This is what happens when you do an image search for "Deep Kuhn") From the company called Kuhn Farm Machinery: The 4000 Chisel Plow provides durable, economical compaction removal with a variety of shanks and point options to meet different requirements. This promotes breakdown of crop residue and allows good root development for the next crop.

Mostly Unrelated Post Script:
Scientists rename human genes to stop Microsoft Excel from misreading them as dates
Aug 2020, The Verge

The HUGO (Human Genome Organisation) Gene Nomenclature Committee (HGNC), the body that names human genes, has renamed 27 genes to keep them from being mangled by Excel's default date-conversion behavior.

For example, SEPT2 is the short name of a gene called Septin 2, but it's also the name of September 2nd, if you speak Excel. A report from 2016 found gene-name errors like this in about 20% of papers with supplementary Excel gene lists. Genomic data is already big data, so when you factor in this erroneous name conversion, we call that dirty data.
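To see how easy these collisions are to catch before they happen, here's a minimal sketch in Python. The month-prefix list and the `audit` helper are my own illustration, not HGNC's or anyone's actual tooling; the three rename pairs are among the 27 HGNC changes.

```python
import re

# Month-like prefixes that a spreadsheet's default coercion reads as dates
# (e.g. "SEPT2" becomes 2-Sep). This list is illustrative, not exhaustive.
MONTH_PREFIXES = ("JAN", "FEB", "MAR", "MARCH", "APR", "JUN", "JUL",
                  "AUG", "SEP", "SEPT", "OCT", "NOV", "DEC")

DATE_LIKE = re.compile(r"^(%s)-?\d{1,2}$" % "|".join(MONTH_PREFIXES))

# A few of the 27 HGNC renamings mentioned above
RENAMED = {"SEPT2": "SEPTIN2", "MARCH1": "MARCHF1", "DEC1": "DELEC1"}

def audit(symbols):
    """Return the symbols a spreadsheet is likely to coerce into dates."""
    return [s for s in symbols if DATE_LIKE.match(s.upper())]

flagged = audit(["SEPT2", "TP53", "MARCH1", "BRCA1", "DEC1"])
print(flagged)                        # ['SEPT2', 'MARCH1', 'DEC1']
print([RENAMED[s] for s in flagged])  # ['SEPTIN2', 'MARCHF1', 'DELEC1']
```

A check this dumb, run over supplementary files before submission, would have caught most of the errors Ziemann's team counted.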

Gene name errors are widespread in the scientific literature. Mark Ziemann et al., Genome Biology, 2016. https://genomebiology.biomedcentral.com/articles/10.1186/s13059-016-1044-7

Friday, September 17, 2021

What Were You Thinking


Brains are complicated, and no matter how much we think we know, there always seems to be more. 

Do we know what we want in a romantic partner? No more than a random stranger would, study says
Jul 2020, phys.org

This is similar to the horoscope joke where a sneaky professor distributes a customized personality profile to each student based on their birthdays (which he looked up and printed out in advance of the class), and then asks the students to score the profiles based on their accuracy. But then it turns out that he passed out the same profile to everyone, and it was written in a way that was ambiguous enough to apply to anyone (and also flattering enough to disarm your bullshit detector), so everyone scores the profile really high, like, "wow, this totally describes me perfectly!"

It's called the Barnum effect, named after the circus showman P.T. Barnum by the psychologist Paul Meehl, to describe the "pseudo-successful" effects of certain psychological tests.

Here are some examples, called Barnum statements, taken from Bertram Forer, the guy who made this famous. Maybe you can start your own horoscope column, or use them on your next date:

  • You have a great need for other people to like and admire you.
  • You have a tendency to be critical of yourself.
  • You have a great deal of unused capacity which you have not turned to your advantage.
  • While you have some personality weaknesses, you are generally able to compensate for them.
  • Your sexual adjustment has presented problems for you.
  • Disciplined and self-controlled outside, you tend to be worrisome and insecure inside.
  • At times you have serious doubts as to whether you have made the right decision or done the right thing.
  • You prefer a certain amount of change and variety and become dissatisfied when hemmed in by restrictions and limitations.
  • You pride yourself as an independent thinker and do not accept others' statements without satisfactory proof.
  • You have found it unwise to be too frank in revealing yourself to others.
  • At times you are extroverted, affable, sociable, while at other times you are introverted, wary, reserved.
  • Some of your aspirations tend to be pretty unrealistic.
  • Security is one of your major goals in life.

Human brain replays new memories at 20 times the speed during waking rest
Jun 2021, phys.org

The fact that we can see "neural replays" is nuts, no?

Also note that this 20x replay speed happens as an essential part of learning, and takes place particularly during breaks from learning. 

via National Institute of Neurological Disorders and Stroke: Cell Reports, Buch et al.: "Consolidation of human skill linked to waking hippocampo-neocortical replay" DOI: 10.1016/j.celrep.2021.109193

A redundant modular network supports proper brain communication
Jul 2021, phys.org

The truth about how memory works keeps getting more and more complicated: "Li and his colleagues were able to see that each hemisphere of the brain has a separate representation of a memory."

And because this is the phrase I'm looking for every day as I read these news briefs: "What they found was unexpected." (See these two other recent examples: one where they found a bacterium that eats metal, which was accidentally left lying around during the first-wave lockdown, and another where they found out that electromagnetic waves cure cancer and diabetes because one scientist borrowed another scientist's mice; he was working on EMF exposure, and she was working on blood sugar.)

via Baylor College of Medicine:  Guang Chen et al, Modularity and robustness of frontal cortical networks, Cell (2021). DOI: 10.1016/j.cell.2021.05.026

Image credit: Holocene - Paul Griffitts Fractal Forums - 2017

The Hidden Network


Study finds surprising source of social influence
Jul 2021, phys.org

Spreading ideas and spreading new ideas are two different things. I thought we knew this as far back as Granovetter's Weak Ties (1973). The people at the edge of the network are the ones who spread the novel information across the network, not the ones at the center of their social circles. Not trying to deflate the work of these researchers, just saying.

As prominent and revered as social influencers seem to be, in fact, they are unlikely to change a person's behavior by example—and might actually be detrimental to the cause.

[In other words, they are NOT influential? Interesting.]

"Dozens of algorithms that are currently used by enterprises seeking to spread new ideas are based on the fallacy that everything spreads virally," says Centola. "But this study shows that the ability for information to pass through a social network depends on what type of information it is."

So, if you want to spread gossip — easily digestible, uncontroversial bits of information — go ahead and tap an influencer. But if you want to transmit new ways of thinking that challenge an existing set of beliefs, seek out hidden locations in the periphery and plant the seed there.

via University of Pennsylvania: Nature Communications (2021). DOI: 10.1038/s41467-021-24704-6

In contrast to influencers, artists are known for being on the fringe of their networks, hence responsible for introducing novel information. This is also unfortunately the reason why so many artists get the socially awkward or anti-social label, and why so many need to rely on the beneficence of a Gertrude Stein or an agent or manager.


Thursday, September 16, 2021

Memetic Supremacy


From genes to memes - Algorithm may help scientists demystify complex networks
Jul 2021, phys.org

This is it.

I might be just now understanding that the reason we can't "do" memetics yet is because of computational power. Too many variables to model.

Keep in mind that fractals, among the most ubiquitous phenomena there are, weren't described mathematically until the 1980s, because that's when computers caught up to them. They needed to be able to run the same simple algorithm hundreds of thousands of times in less than a lifetime in order to see any results, and that had to wait for faster computers.

This time, the advancement comes in the form of Boolean networks. These networks aren't just about having lots of on/off switches, but about --networking-- all those switches. That's the hard part. Similar to fractals, which start from a simple equation (z(n+1) = z(n)^2 + c), THIS is just a collection of nodes, each either on or off. Sounds simple, but it's what happens when it gets scaled up that gives us the complexity of the Twittersphere, for example. A few dozen nodes can generate millions of states.

And before we go any further, it should be noted that quantum computing will flip this entire paradigm upside down, and what today is impossible to even imagine will tomorrow be beamed to your brain via quantum cloud servers in outer space faster than you can ask for it.

Until then, the news here is that these networks are now being analyzed by these two methods:

1. Parity - making a mirror image of the network in which every ON node is switched to OFF, and vice versa, to identify critical subnetworks, and

2. Time Reversal - to identify which network configurations precede which outcomes. 
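Neither transform requires exotic machinery. Here's a toy sketch in Python, where the three-node network and its update rules are invented for illustration (not taken from the paper): a synchronous Boolean network, a time-reversed predecessor map, and a parity mirror.

```python
from itertools import product

# A toy 3-node synchronous Boolean network; the rules are made up.
def step(state):
    a, b, c = state
    return (b and c, a or c, not a)

states = list(product([False, True], repeat=3))
succ = {s: step(s) for s in states}  # forward dynamics

# Time reversal: invert the successor map to find each state's predecessors,
# i.e. which configurations precede which outcomes.
pred = {s: [t for t in states if succ[t] == s] for s in states}

# Parity: the "mirror" network obtained by complementing inputs and outputs.
def flip(state):
    return tuple(not x for x in state)

def mirror_step(state):
    return flip(step(flip(state)))

# The mirror conjugates the dynamics: flipping a state, stepping the mirror,
# and flipping back reproduces the original step.
assert all(mirror_step(flip(s)) == flip(succ[s]) for s in states)

print(len(states))  # 8 states for 3 nodes; n nodes give 2**n, hence the blow-up
```

The punchline of the last print: the state space doubles with every node you add, which is exactly why 16,000 genes needed new tricks.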

Currently, they're working on networks of 16,000 genes, which is a lot more than we've ever done before. And currently, this work is to learn more about cancer cells, but soon enough we'll be modeling social uprisings and second-order psychological operations. 

via Pennsylvania State University, Broad Institute, Dana-Farber Cancer Institute, Semmelweis University, and Center for Complex Network Research: "Parity and time reversal elucidate both decision-making in empirical models and attractor scaling in critical Boolean networks" Science Advances (2021). https://advances.sciencemag.org/lookup/doi/10.1126/sciadv.abf8124

Post Script:
A new model enables the recreation of the family tree of complex networks
Jun 2021, phys.org

This new study analyzes the time evolution of the citation network in scientific journals and the international trade network over a 100-year period. According to M. Ángeles Serrano, ICREA researcher at UBICS, "What we observe in these real networks is that both grow in a self-similar way, that is, their connectivity properties remain invariable over time, so that the network structure is always the same, while the number of nodes increases."
-University of Barcelona: Muhua Zheng et al, Scaling up real networks by geometric branching growth, Proceedings of the National Academy of Sciences (2021). DOI: 10.1073/pnas.2018994118

Also, don't forget this one, which aged well, very, very well:
Cyber Swarming, Memetic Warfare and viral Insurgency: How Domestic Militants Organize on Memes to Incite Violent Insurrection and Terror Against Government and Law Enforcement; A Contagion and Ideology Report. Alex Goldberg from The Network Contagion Research Institute, Joel Finkelstein from The Network Contagion Research Institute and The James Madison Program in American Ideals and Institutions at Princeton University. Rutgers Miller Center for Community Protection and Resilience. Feb 7 2020. [pdf link]

^Published February 7, 2020, you know, almost a year before January 6, 2021.

Cryptographic Carbon Markets


Using carbon is key to decarbonizing economy
Aug 2021, phys.org

Instead of burning the hydrocarbons for energy, split them into carbon nanotubes for a materials revolution, and hydrogen for a hydrogen power revolution. 
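For a sense of scale, here's a hedged back-of-the-envelope on methane pyrolysis (CH4 into solid carbon plus hydrogen). The reaction and molar masses are textbook values; the framing is mine, and real processes won't hit these ideal yields.

```python
# Back-of-the-envelope for methane pyrolysis: CH4 -> C (solid) + 2 H2.
# Molar masses in g/mol; the figures are illustrative ideal yields.
M_C, M_H = 12.011, 1.008
M_CH4 = M_C + 4 * M_H   # 16.043
M_H2 = 2 * M_H          # 2.016

per_kg_ch4_h2 = 2 * M_H2 / M_CH4  # kg H2 per kg CH4
per_kg_ch4_c = M_C / M_CH4        # kg solid carbon per kg CH4

print(f"{per_kg_ch4_h2:.3f} kg H2 and {per_kg_ch4_c:.3f} kg C per kg CH4")
# roughly 0.251 kg H2 and 0.749 kg C
```

In other words, three-quarters of the mass of every kilogram of methane could leave as a solid material feedstock instead of as CO2.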

Graphene and hydrogen factories for mining electronic money: if you don't think it's going to happen, you're not trying hard enough.

(Granted, this project, at Rice University, is related to Shell, the oil company, who would really like to see a future where hydrocarbons are still really important. Then again, if carbon-based nanotubes could replace metals, which use a ton of energy to mine and process, that could be a benefit in itself?)

Post Script:
The Bitcoin saga continues to deliver:

Researchers found a seasonal movement of mining between Chinese provinces in response, it was suggested, to the availability of hydro-electric power.

Mining moved from the coal-burning northern province of Xinjiang in the dry season, to the hydro-abundant southern province of Sichuan in the rainy season.

China's ban on cryptocurrency mining has forced bitcoin entrepreneurs to flee overseas. Many are heading to Texas, which is quickly becoming the next global cryptocurrency capital.

"Bitcoin refugees" in the "Great Mining Migration" are looking for more relaxed digital currency policy, a stable regulatory environment, diverse sources of capital, and cheap electricity. (How about a reliable power grid though? Seriously?)

I hope people are taking notes, because the crypto saga is a playbook for the way things are going to be. Bitcoin is doing to the future of energy production and computation what the pandemic did to all things "remote" -- the work-from-home revolution would have taken another 10 years without the pandemic.

With or without the climate apocalypse to speed things up, not only will renewable energy production decentralize and redistribute into a splintered scattering of nodes, but those nodes move with the weather. 

Tuesday, September 14, 2021

On Memory in Times Past


Ancient Australian Aboriginal memory tool superior to 'memory palace' learning
May 2021, phys.org

Imagine that.

Although this article is about how the Memory Palace technique of memorization is second place to the Aboriginal technique of attaching facts to the landscape, via a narrative, I found this passage interesting: "The memory palace technique dates back to the early Greeks and was further utilized by Jesuit priests. Handwritten books were scarce and valuable, and one reading would have to last a person's lifetime, so ways to remember the contents were developed."

via Monash University: PLOS One (2021)

The Fingerbot


Using the human hand as a powerless infrared radiation source
Apr 2021, phys.org

The future is so obvious once it happens.

The idea of your fingers manipulating computer screens floating in mid air seems pretty natural. I never thought of my finger as a remote control, but it does make perfect sense. This also reminds me of the Nintendo Wii hack from many years ago, where you place the Wii controller in a stationary position, and have it read an IR light pen as you draw on a surface with it. The light-point is followed by the Wii, and translated into lines on a projected screen. You use the Wii backwards to write on the projected surface in real time. Graffiti Research Labs took this a step further with their Laser Tag project. Now we can imagine doing that with our fingers. No Wii necessary, only some basic intelligence in your camera, and it will be able to learn how to see your fingers and take direction from them. 

via Shanghai Jiao Tong University: Shun An et al. Human hand as a powerless and multiplexed infrared light source for information decryption and complex signal generation, Proceedings of the National Academy of Sciences (2021). DOI: 10.1073/pnas.2021077118

Image credit: It's a fingerbot, so you can inject an analog loop into your IoT wifi smart home mesh network. But they changed the name to switchbot; probably a good idea. 

Post Script:
Speaking of Laser Tag...
This is the EyeWriter, which allows paralyzed graffiti artists to write graffiti with their eyeballs.

And this is his graffiti, TEMPT circa 2009.

Wednesday, September 8, 2021

Artificial Intuition - It's What Computers Crave


AKA The Pendulum of the Anthropocene

Ten years ago I recall meta-presenting to my high school class: Iain McGilchrist on the TED stage, via RSA Animate. He wrote a book called The Master and His Emissary: The Divided Brain and the Making of the Western World, about the left-brain right-brain dichotomy, and about how the history of humans since the Enlightenment is a story of the shift from right-brained religious order to left-brained scientific discovery.

But I being an art teacher, and they being young people who had no reason to believe that the entire history of humankind is any indication of human futures, I speculated aloud, for their sake -- are we really destined to continue on this trajectory forever? What if the pendulum were to swing in the other direction? Could there be a world where the right brain of feelings and emotions is more important than facts and data?

I didn't think what I was speculating could actually be true. My job was to ask the craziest questions imaginable, in the hopes of stretching the most malleable material on Earth -- the minds of young people. I myself could never imagine a world where the right brain became the dominant force. But I always held back my own biases, because one thing I was certain about was that the world these kids would grow up in would be very different from the world I grew up in, and the only way to prepare them was to forget everything I knew, and say nothing else but "what if".

And then it happened. Facts became optional. Computers became creative. Only 500 years after its appearance, Reason has begun to lose its appeal, and its utility. 

First, we see algorithms generating theories without any data:

The Ramanujan Machine - Researchers have developed a 'conjecture generator' that creates mathematical conjectures
Feb 2021, phys.org

"The Ramanujan Machine" generates conjectures without proving them, by "imitating" intuition using AI and considerable computer automation.

via Israel Institute of Technology: Gal Raayoni et al. Generating conjectures on fundamental constants with the Ramanujan Machine, Nature (2021). DOI: 10.1038/s41586-021-03229-4
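In miniature, the idea looks something like this sketch. It's my own toy version, not the actual Ramanujan Machine pipeline: evaluate continued fractions numerically, then match the result against a table of fundamental constants, proof to follow (or not).

```python
import math

def cf_value(terms):
    """Evaluate a finite simple continued fraction [a0; a1, a2, ...]."""
    value = terms[-1]
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

# e has the classic continued-fraction pattern [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]
def e_terms(n):
    terms, k = [2], 1
    while len(terms) < n:
        terms += [1, 2 * k, 1]
        k += 1
    return terms[:n]

candidate = cf_value(e_terms(30))

# "Conjecture generation" in miniature: match the number against known constants
constants = {"e": math.e, "pi": math.pi, "golden": (1 + 5 ** 0.5) / 2}
match = min(constants, key=lambda k: abs(constants[k] - candidate))
print(match)  # e
```

The real system searches over huge families of such formulas with gradient-style optimization; the "intuition" is just pattern-matching at scale, which is exactly the point.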

And then, we see algorithms generating data without any theories:

New machine learning theory raises questions about nature of science
Feb 2021, phys.org

Instead of teaching the program the laws of physics, he just shows it all the orbits of all the planets until it can produce its own orbits. No more rules baby:

And he goes on: "I would argue that the ultimate goal of any scientist is prediction. You might not necessarily need a law. For example, if I can perfectly predict a planetary orbit, I don't need to know Newton's laws of gravitation and motion. I go directly from data to data."
-Hong Qin, physicist at the U.S. Department of Energy's Princeton Plasma Physics Laboratory

via Princeton Plasma Physics Laboratory: Hong Qin, Machine learning and serving of discrete field theories, Scientific Reports (2020). DOI: 10.1038/s41598-020-76301-0
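Here's a toy version of that "data to data" claim, assuming nothing from the paper's actual method: fit a linear one-step map to samples of a circular orbit and roll it forward, with no equations of motion supplied.

```python
import numpy as np

# "Data to data": learn a one-step predictor for a circular orbit from
# observations alone; the rotation law is never written down.
dt = 0.1
t = np.arange(0, 20, dt)
orbit = np.stack([np.cos(t), np.sin(t)], axis=1)  # (x, y) samples

X, Y = orbit[:-1], orbit[1:]                # state -> next state
A, *_ = np.linalg.lstsq(X, Y, rcond=None)   # fit linear map Y ~ X @ A

# Roll the learned map forward from the last observed point
state = orbit[-1]
preds = []
for _ in range(50):
    state = state @ A
    preds.append(state)
preds = np.array(preds)

truth = np.stack([np.cos(t[-1] + dt * np.arange(1, 51)),
                  np.sin(t[-1] + dt * np.arange(1, 51))], axis=1)
print(np.max(np.abs(preds - truth)))  # tiny: the map was learned, not derived
```

The regression quietly recovers the rotation matrix, so the predictions are near-exact -- which is Qin's provocation in a nutshell: the predictor works whether or not anyone ever states the law.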

Another one, we let the algorithms imagine, just to see what they come up with:

Enabling the 'imagination' of artificial intelligence
Jul 2021, phys.org

They're really good at synthesizing, but not at creating imagery from scratch. Here, it's "controllable disentangled representation learning," or what the article calls "imagination."

via University of Southern California: Yunhao Ge et al, Zero-shot Synthesis with Group-Supervised Learning. Open Review. https://openreview.net/forum?id=8wqCDnBmnrT


Thursday, September 2, 2021

Becoming the Artificial Unconscious


Yves Tanguy – Indefinite Divisibility – 1942

The first step to becoming a computer is to think like a computer. The next step to becoming a computer is to materialize those thoughts, perhaps in the form of pictures. 

The Surrealists did this with our collective unconscious. At a time in history when humans were becoming "modern", they took the dark mess of confusion that was our subconscious and made it visible. They used automatic artmaking, dream recall, and mind games like Exquisite Corpse to hack through our subconscious and bring back precious material to help us understand ourselves and what we were becoming. 

The above image is an example of the unconscious mind made visible by Yves Tanguy from 1942.

Today, we are again becoming new kinds of human. Our minds are merging with the computer algorithms we have created, so they in turn create us, in a never-ending feedback loop of evolution. 

Art has lots of purposes, but I like to think the most important purpose of all is to teach us how to be human. And today, being human has a lot to do with being a computer. This is what computer-brains look like on the inside:

BigGAN-generated image by Mario Klingemann - 2019

They're also calling it social media performance art. But I think artists have been doing this for quite a while, under the name generative digital art, or algorithmic art. I just wonder what the Surrealists would say if they could see this stuff.

Post Script:
Melbourne artist and coder Sam Hains created Zero Likes, an AI trained to respond only to those lost and lonely images that miss out on attention.

Dog layers on a bed with a blanket - Sam Hains Zero Likes - 2017

Post Post Script:
First of all, the MIT Press has a journal called Leonardo. Next, here is a nice explanation of the artistic process by neuroscience researchers, written in the online science magazine Medical Xpress.

"Ultimately, we sought to explain the role of implicit learning processes in artistic cognition, or how the competition between different brain networks can lead to a more effective artistic intuition."

And they found that "weaker" prefrontal circuits, which are related to executive functions, can actually lead to more effective artistic cognition. The researchers refer to this phenomenon as the Andras effect.

"For example, if a photographer can tune down her control functions and access to long-term memories, she can perceive a 'different world'; a world without expectations or past memories," Nemeth said. "We can call this intuitive photography."

Kate Schipper et al. How do competitive neurocognitive processes contribute to artistic cognition? – The Andras-effect, Leonardo (2020). DOI: 10.1162/leon_a_02007

I'm also thinking now about how the chemical promiscuity of olfactory receptors makes mindful, environmental odor exploration (paying attention to smells) a great means to exercise your artistic mind.