Friday, March 25, 2022

Neuromimetics


Nobody uses the word neuromimetic, but that's what it is. 

Instead, we're calling these new computer chips neuromorphic, which you would think means that their design is shaped (morphed) like the brain. Except that's not what it means -- instead, neuromorphic computer chips *behave* like the brain. I would call that mimetic.

This post is a collection of news articles about the next generation in computing -- no, not GPUs, I know we're just getting familiar with those, but the next thing is already here. That next, next thing is the TPU (tensor processing unit), and it's got something to do with the memory being on the chip itself, so you don't have to go across the bus to an external memory. Sorry, not a computer scientist here, just trying to get the basic idea.

These new chips make possible a new type of architecture that's really good at handling really large datasets, which we happen to have (digital content is on track to equal half of Earth's mass by 2245).

These chips act more like synapses, hence the term neuromorphic. I'll be drifting in and out of this topic specifically, since some of these stories are about regular old GPU-based neural networks, which, as you would guess from their name, are a kind of neuromorphic hardware in themselves. Also, cerebral organoids: less mimetic, since they're made of actual brain tissue, but we can't talk about brain-like things without talking about those.


New approach found for energy-efficient AI applications
Mar 2021, phys.org

Using not just spike activation and inhibition but the temporal pattern of the spikes -- encoding information in when a neuron fires, not just whether or how often, means far fewer spikes (and far less energy) are needed. The paper title says it all: high-accuracy image classification through temporal coding with two spikes.

I get confused when someone talks about an "artificial" neural network, because I thought the whole thing was an artificial brain to begin with. But alas --

"This low energy consumption is made possible by inter-neuronal communication by means of very simple electrical impulses, so-called spikes. The information is thereby encoded not only by the number of spikes, but also by their time-varying patterns. "You can think of it like Morse code. The pauses between the signals also transmit information," Maass explains.

via Graz University of Technology: C. Stoeckl and W. Maass. Optimized spiking neurons can classify images with high accuracy through temporal coding with two spikes. Nature Machine Intelligence. (2021) DOI: 10.1038/s42256-021-00311-4


'Edge of chaos' opens pathway to artificial intelligence discoveries
Jul 2021, phys.org

An artificial network of nanowires can be tuned to respond in a brain-like way when electrically stimulated. Kept in a brain-like state "at the edge of chaos," the network performed tasks at an optimal level. Electrical signals put through the network automatically find the best route for transmitting information, and the architecture allows the network to 'remember' previous pathways through the system.

This, they say, suggests the underlying nature of neural intelligence is physical, and their discovery opens an exciting avenue for the development of artificial intelligence.
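
The "remember previous pathways" part is the interesting bit. Here's a toy cartoon of that idea in Python (using networkx; my sketch, not the paper's physical model): every junction a signal passes through gets a little more conductive, so repeated stimulation entrenches a route.

```python
import networkx as nx

# A toy "nanowire" grid where each junction has an electrical resistance.
g = nx.grid_2d_graph(6, 6)
nx.set_edge_attributes(g, 1.0, "resistance")

def route_and_reinforce(g, src, dst, strengthen=0.2):
    # current follows the current least-resistance path...
    path = nx.shortest_path(g, src, dst, weight="resistance")
    # ...and every junction it crosses becomes a little more conductive,
    # so the network "remembers" pathways it has used before
    for u, v in zip(path, path[1:]):
        g.edges[u, v]["resistance"] *= (1.0 - strengthen)
    return path

# repeated stimulation entrenches one route out of the many possible ones
for _ in range(3):
    print(route_and_reinforce(g, (0, 0), (5, 5)))
```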

via University of Sydney and Japan's National Institute for Material Science: Nature Communications (2021). DOI: 10.1038/s41467-021-24260-z

Tiny brains grown in 3D-printed bioreactor
Apr 2021, phys.org

Exactly what it sounds like.

via American Institute of Physics, MIT and the Indian Institute of Technology Madras: "A low-cost 3D printed microfluidic bioreactor and imaging chamber for live-organoid imaging" Biomicrofluidics (2021). aip.scitation.org/doi/10.1063/5.0041027


Team presents brain-inspired, highly scalable neuromorphic hardware
Aug 2021, phys.org

Good explanation in the writeup by KAIST:

Neuromorphic hardware has attracted a great deal of attention because of its artificial intelligence functions, but consuming ultra-low power of less than 20 watts by mimicking the human brain. To make neuromorphic hardware work, a neuron that generates a spike when integrating a certain signal, and a synapse remembering the connection between two neurons are necessary, just like the biological brain. However, since neurons and synapses constructed on digital or analog circuits occupy a large space, there is a limit in terms of hardware efficiency and costs. Since the human brain consists of about 10^11 neurons and 10^14 synapses, it is necessary to improve the hardware cost in order to apply it to mobile and IoT devices.

To solve the problem, the research team mimicked the behavior of biological neurons and synapses with a single transistor, and co-integrated them onto an 8-inch wafer. The manufactured neuromorphic transistors have the same structure as the transistors for memory and logic that are currently mass-produced. In addition, the neuromorphic transistors proved for the first time that they can be implemented with a "Janus structure" that functions as both neuron and synapse, just like coins have heads and tails.
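
The behavior KAIST is packing into a single transistor -- integrate incoming signal, fire a spike past a threshold, store a connection weight -- is simple enough to sketch in software. Here's a minimal leaky integrate-and-fire toy in Python (my sketch of the behavior being mimicked, not KAIST's device):

```python
class LIFNeuron:
    """Leaky integrate-and-fire: accumulate input, leak a little each step,
    and emit a spike (then reset) once the membrane potential crosses threshold."""

    def __init__(self, threshold=1.0, leak=0.95):
        self.v = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, current):
        self.v = self.v * self.leak + current
        if self.v >= self.threshold:
            self.v = 0.0
            return 1  # spike
        return 0

# the "synapse" is just a stored weight between two neurons
pre, post = LIFNeuron(), LIFNeuron()
synapse_weight = 0.6

for t in range(20):
    pre_spike = pre.step(0.3)                    # constant drive into the first neuron
    post_spike = post.step(pre_spike * synapse_weight)
```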

via The Korea Advanced Institute of Science and Technology: Joon-Kyu Han et al, Cointegration of single-transistor neurons and synapses by nanoscale CMOS fabrication for highly scalable neuromorphic hardware, Science Advances (2021). DOI: 10.1126/sciadv.abg8836


Reappraisal of Moore's law through chip density
Aug 2021, phys.org

"DRAM chips as model organisms for the study of technological evolution."

Kevin Kelly has entered the chat. When I hear people talking about computer chips as evolving organisms, I think about What Technology Wants, the 2010 book by Kevin Kelly, where he says that technology is a lifeform that evolves, and also one which manipulates us for its own evolution. 

But the real reason we're posting this article:

The next growth spurt in transistor miniaturization and computing capability is now overdue, they say.

They're saying that Moore's Law is about to jump again, and the DRAM chips, which aren't too much different from neuromorphic architectures, are about to help us make that jump. 
[image: Moore's Law]

"The end of silicon chip era is in view"

via Rockefeller University: Moore's Law Revisited through Intel Chip Density. David Burg and Jesse H. Ausubel. PLOS ONE (2021). https://doi.org/10.1371/journal.pone.0256245


Artificial brain networks simulated with new quantum materials
Sep 2021, phys.org

By combining new superconducting materials with specialized oxides, the researchers successfully demonstrated the backbone of networks of circuits and devices that mirror the connectivity of neurons and synapses in biologically based neural networks.

"Neuromorphic computing is inspired by the emergent processes of the millions of neurons, axons and dendrites that are connected all over our body in an extremely complex nervous system." -UC president and physicist Robert Dynes

The researchers' innovation was based on joining two types of quantum substances—superconducting materials based on copper oxide and metal insulator transition materials that are based on nickel oxide. They created basic "loop devices" that could be precisely controlled at the nano-scale with helium and hydrogen, reflecting the way neurons and synapses are connected. Adding more of these devices that link and exchange information with each other, the simulations showed that eventually they would allow the creation of an array of networked devices that display emergent properties like an animal's brain.

via University of California San Diego and Purdue University: Uday S. Goteti et al, Low-temperature emergent neuromorphic networks with correlated oxide devices, Proceedings of the National Academy of Sciences (2021). DOI: 10.1073/pnas.2103934118


GPUs open the potential to forecast urban weather for drones and air taxis
Oct 2021, phys.org

This paper is interesting; it talks about microscale airflow patterns in cities with tall buildings. But I thought it would be a good idea to repaste this explanation right here, for people who haven't noticed --

CPUs excel at performing multiple tasks, including control, logic, and device-management operations, but their ability to perform fast arithmetic calculations is limited. GPUs are the opposite. Originally designed to render 3D video games, GPUs are capable of fewer tasks than CPUs, but they are specially designed to perform mathematical calculations very rapidly.
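
As a rough analogy in code (NumPy here, which runs on the CPU, but libraries like CuPy or JAX run essentially the same expression on a GPU): GPU-friendly work is one arithmetic expression applied across a huge array all at once, instead of a loop that visits each grid point in turn.

```python
import numpy as np

wind = np.random.rand(1_000_000)  # say, wind speed at a million grid points

# CPU-style thinking: visit each grid point one at a time
adjusted_loop = [w * 1.1 + 0.5 for w in wind]

# GPU-style thinking: one arithmetic expression over the whole array at once;
# swap numpy for cupy or jax.numpy and the same line runs on a GPU
adjusted_vec = wind * 1.1 + 0.5
```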

And just wait, because TPUs are right on their heels.

via National Center for Atmospheric Research: Domingo Muñoz‐Esparza et al, Efficient Graphics Processing Unit Modeling of Street‐Scale Weather Effects in Support of Aerial Operations in the Urban Environment, AGU Advances (2021). DOI: 10.1029/2021AV000432


Intel launches its next-generation neuromorphic processor—so, what’s that again?
Oct 2021, Ars Technica

Unlike a normal processor, there's no external RAM. Instead, each neuron has a small cache of memory dedicated to its use. This includes the weights it assigns to the inputs from different neurons, a cache of recent activity, and a list of all the other neurons that spikes are sent to.
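
So each neuron drags its own little memory around with it. A rough sketch of that per-neuron state in Python (my guess at the general layout, with a made-up class name -- not Intel's actual data structures):

```python
from dataclasses import dataclass, field

@dataclass
class LoihiStyleNeuron:
    # everything this neuron needs lives right next to it -- no external RAM
    weights: dict = field(default_factory=dict)           # input neuron id -> weight
    recent_activity: list = field(default_factory=list)   # cache of recent spikes
    fanout: list = field(default_factory=list)            # neuron ids we send spikes to
    potential: float = 0.0
```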

Also, re Intel's new chip:

Other changes are very specific to spiking neural networks. The original processor's spikes, as mentioned above, only carried a single bit of information. In Loihi 2, a spike is an integer, allowing it to carry far more information and to influence how the recipient neuron sends spikes. (This is a case where Loihi 2 might be somewhat less like the neurons it's mimicking in order to perform calculations better.)
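
A toy sketch of the difference (mine, not Loihi's actual programming model): in Loihi 1 a spike only says "I fired"; in Loihi 2 it also carries a magnitude.

```python
# Loihi 1 style: a spike is a single bit -- the receiver only learns "I fired"
def receive_binary_spike(potential, weight):
    return potential + weight

# Loihi 2 style: the spike carries an integer payload, so one event can also
# say how strongly to influence the receiving neuron
def receive_graded_spike(potential, weight, payload):
    return potential + weight * payload

v = 0.0
v = receive_binary_spike(v, 0.4)              # +0.4
v = receive_graded_spike(v, 0.4, payload=3)   # +1.2
```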

via Intel's "Loihi 2: A New Generation of Neuromorphic Computing", 2021

Post Script:
Hiddenite: A new AI processor for reduced computational power consumption based on a cutting-edge neural network theory
Feb 2022, phys.org

Hiddenite: hidden neural network inference tensor engine.

It's an "accelerator chip" that "does neural networks better" by being better at pruned neural networks, which are also called "hidden neural networks", and by reducing memory needs, which is a big deal in the age of big data. 

And how often do we get to see RNGs in practical application in the news -- "The Hiddenite architecture (Fig. 2) offers three-fold benefits to reduce external memory access and achieve high energy efficiency. The first is that it offers the on-chip weight generation for re-generating weights by using a random number generator. This eliminates the need to access the external memory and store the weights."
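
The trick, roughly, as a hedged sketch (my reconstruction of the idea, with hypothetical function names -- not the chip's actual pipeline): if the weights came out of a random number generator in the first place, you never have to store or fetch them; keep the seed and a pruning mask, and regenerate the weights on demand.

```python
import numpy as np

def regenerate_layer_weights(seed, shape, mask):
    """Re-create a layer's weights on demand from a seeded RNG instead of
    fetching them from external memory; the binary mask is the 'hidden network'
    found by pruning the random weights."""
    rng = np.random.default_rng(seed)
    random_weights = rng.standard_normal(shape)
    return random_weights * mask  # keep only the un-pruned connections

# all that has to be stored or transferred: a seed and a pruning mask
seed, shape = 42, (256, 256)
mask = np.random.default_rng(7).random(shape) > 0.9  # keep ~10% of weights
weights = regenerate_layer_weights(seed, shape, mask)
```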

They're also four-dimensional (4D) parallel processors. 

I don't see the words neuromorphic or TPU in here, but I imagine it's not too distantly related.

via Tokyo Institute of Technology: Hiddenite: 4K-PE Hidden Network Inference 4D-Tensor Engine Exploiting On-Chip Model Construction Achieving 34.8-to-16.0TOPS/W for CIFAR-100 and ImageNet, 15.4, ML Processors LIVE Q&A with demonstration, February 23 9:00AM PST, International Solid-State Circuits Conference 2022 (ISSCC 2022).
