Artificial neurons go quantum with photonic circuits
Mar 2022, phys.org
Good copy:
At the heart of all artificial intelligence applications are mathematical models called neural networks. These models are inspired by the biological structure of the human brain, which is made of interconnected nodes. Just as our brain learns by constantly rearranging the connections between neurons, neural networks can be mathematically trained by tuning their internal structure until they become capable of human-level tasks: recognizing our faces, interpreting medical images for diagnosis, even driving our cars. Building integrated devices that can perform neural-network computations quickly and efficiently has thus become a major research focus, both academic and industrial.

One of the major game changers in the field was the discovery of the memristor in 2008. This device changes its resistance depending on a memory of the past current, hence the name memory-resistor, or memristor. Immediately after its discovery, scientists realized that (among many other applications) the peculiar behavior of memristors was surprisingly similar to that of neural synapses. The memristor has thus become a fundamental building block of neuromorphic architectures.
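That memristive behavior is easy to see in simulation. Below is a minimal sketch of the classical linear ion-drift memristor model associated with the 2008 discovery; the parameter values and variable names are illustrative assumptions, not taken from the photonic quantum device in the paper cited below.

```python
import numpy as np

# Minimal sketch of a linear ion-drift memristor (illustrative parameters,
# not the photonic quantum device discussed in the paper below).
R_ON, R_OFF = 100.0, 16_000.0  # resistance when fully doped / undoped (ohms)
D = 10e-9                      # device thickness (m)
MU_V = 1e-14                   # dopant mobility (m^2 / (V s))

dt = 1e-4
t = np.arange(0.0, 2.0, dt)
v = np.sin(2 * np.pi * 1.0 * t)   # sinusoidal drive voltage (V)

w = 0.5 * D                       # internal state: width of the doped region
i = np.zeros_like(t)
for k, vk in enumerate(v):
    x = w / D
    R = R_ON * x + R_OFF * (1.0 - x)    # resistance depends on the state
    i[k] = vk / R
    w += MU_V * (R_ON / D) * i[k] * dt  # past current moves the state...
    w = min(max(w, 0.0), D)             # ...within physical bounds

# Plotting i against v traces a pinched hysteresis loop: the resistance
# "remembers" the history of the current through the device, which is the
# synapse-like behavior that makes memristors neuromorphic building blocks.
```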
via University of Vienna: Michele Spagnolo et al, Experimental photonic quantum memristor, Nature Photonics (2022). DOI: 10.1038/s41566-022-00973-5
Image credit: Topological Defects, Oleg Lavrentovich at Kent State University, 2006 [link]
Neuromorphic simulations can yield computational advantages relevant to many applications
Mar 2022, phys.org
via Sandia National Laboratories: J. Darby Smith et al, Neuromorphic scaling advantages for energy-efficient random walk computations, Nature Electronics (2022). DOI: 10.1038/s41928-021-00705-7
Study highlights the potential of neuromorphic architectures to perform random walk computations
Apr 2022, phys.org
via Sandia National Laboratories: J. Darby Smith et al, Neuromorphic scaling advantages for energy-efficient random walk computations, Nature Electronics (2022). DOI: 10.1038/s41928-021-00705-7
How to build brain-inspired neural networks based on light
Apr 2022, phys.org
via Eindhoven University of Technology: Bin Shi et al, Deep Neural Network Through an InP SOA-Based Photonic Integrated Cross-Connect, IEEE Journal of Selected Topics in Quantum Electronics (2019). DOI: 10.1109/JSTQE.2019.2945548
Neuromorphic memory device simulates neurons and synapses
May 2022, phys.org
via The Korea Advanced Institute of Science and Technology KAIST: Sang Hyun Sung et al, Simultaneous emulation of synaptic and intrinsic plasticity using a memristive synapse, Nature Communications (2022). DOI: 10.1038/s41467-022-30432-2
Topological matter - Nature - Jul 2016
Demonstrating significant energy savings using neuromorphic hardware
May 2022, phys.org
The "Loihi" chip can get up to sixteen times more energy-efficiency than non-neuromorphic hardware.These chips are chasing something the brain already does naturally, and much more efficiently than our conventional chips, because our brain stores information as something called "internal variables" which are from the used neurons in a network getting fatigued, and then just measuring which ones in the network are fatigued, to know which ones were just activated. Neurons are storing memory simply by not working, and that's about as energy efficient as you can get.
via Graz University of Technology's Institute of Theoretical Computer Science and Intel Labs, and supported by The Human Brain Project: Arjun Rao et al, A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware, Nature Machine Intelligence (2022). DOI: 10.1038/s42256-022-00480-w
Ultrafast 'camera' captures hidden behavior of potential 'neuromorphic' material
May 2022, phys.org
"Vanadium dioxide is one of the rare, amazing materials that has emerged as a promising candidate for neuro-mimetic bio-inspired devices"
via Brookhaven National Laboratory: Junjie Li et al, Direct Detection of V-V Atom Dimerization and Rotation Dynamic Pathways upon Ultrafast Photoexcitation in VO2, Physical Review X (2022). DOI: 10.1103/PhysRevX.12.021032
A neuromorphic computing architecture that can run some deep neural networks more efficiently
Jun 2022, phys.org
In their experiments, Maass and his colleagues showed that the tendency of many biological neurons to rest after spiking could be replicated in neuromorphic hardware and used as a "computational trick" to solve time-series processing tasks more efficiently. In these tasks, new information needs to be combined with information gathered in the recent past (e.g., sentences from a story that the network processed beforehand).

"We showed that the network just needs to check which neurons are currently most tired, i.e., reluctant to fire, since these are the ones that were active in the recent past," Maass said. "Using this strategy, a clever network can reconstruct what information was recently processed. Thus, 'laziness' can have advantages in computing."
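The readout side of this "laziness" trick can be sketched in a few lines: give each input symbol its own adaptive neuron, and recent inputs become recoverable from the fatigue levels alone, with no separate memory buffer. This is a toy illustration of the idea, not the network from the paper.

```python
import numpy as np

# Toy version of "check which neurons are most tired": one adaptive neuron
# per input symbol. Recently presented symbols are recoverable from the
# fatigue levels alone. Illustrative only, not the paper's architecture.
SYMBOLS = list("abcd")
TAU_A = 50.0  # fatigue decay time constant, in time steps

def recall_recent(sequence):
    fatigue = np.zeros(len(SYMBOLS))
    for sym in sequence:
        fatigue *= np.exp(-1.0 / TAU_A)     # every neuron relaxes a little
        fatigue[SYMBOLS.index(sym)] += 1.0  # the neuron that fires gets tired
    order = np.argsort(fatigue)[::-1]       # most fatigued = most recent/frequent
    return [SYMBOLS[i] for i in order]

print(recall_recent("abca"))  # 'a' ranks first: seen most recently (and twice)
```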
via Graz University of Technology and Intel and funded by the Human Brain Project: Arjun Rao et al, A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware, Nature Machine Intelligence (2022). DOI: 10.1038/s42256-022-00480-w
Topological Solitons - Soft Matter Publishing - 2020
A chip that can classify nearly 2 billion images per second
Jun 2022, phys.org
Optical Deep Neural Network:
"Our chip processes information through what we call 'computation-by-propagation,' meaning that unlike clock-based systems, computations occur as light propagates through the chip," says Aflatouni. "We are also skipping the step of converting optical signals to electrical signals because our chip can read and process optical signals directly, and both of these changes make our chip a significantly faster technology.""When current computer chips process electrical signals they often run them through a Graphics Processing Unit, or GPU, which takes up space and energy," says Ashtiani. "Our chip does not need to store the information, eliminating the need for a large memory unit.""A movie usually plays between 24 and 120 frames per second. This chip will be able to process nearly 2 billion frames per second! For problems that require light speed computations, we now have a solution, but many of the applications may not be fathomable right now."
You heard the man. Fathom away.
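At nearly 2 billion frames per second, each classification takes roughly half a nanosecond. A toy numerical stand-in for computation-by-propagation is sketched below: each layer is a fixed complex transmission matrix, and the only nonlinearity is photodetection (power = |amplitude|^2). The layer sizes and the |.|^2 nonlinearity are assumptions for illustration, not the Penn chip's actual design.

```python
import numpy as np

# Toy stand-in for "computation-by-propagation": layers are fixed complex
# transmission matrices; light interferes as it crosses each layer and the
# nonlinearity comes from photodetection (optical power = |amplitude|^2).
# Nothing is clocked or stored between layers. Sizes are assumptions.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(9, 9)) + 1j * rng.normal(size=(9, 9)),
          rng.normal(size=(2, 9)) + 1j * rng.normal(size=(2, 9))]

def classify(pixels):
    field = pixels.astype(complex)      # image encoded in the input light
    for T in layers:
        field = np.abs(T @ field) ** 2  # propagate, interfere, photodetect
    return int(field.argmax())          # brightest output port is the class

image = rng.random(9)                   # a tiny 3x3 "image", flattened
print(classify(image))
```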
via University of Pennsylvania: Farshid Ashtiani et al, An on-chip photonic deep neural network for image classification, Nature (2022). DOI: 10.1038/s41586-022-04714-0
New hardware offers faster computation for artificial intelligence, with much less energy
Jul 2022, phys.org
Massive:
A practical inorganic material in the fabrication process enables devices to run 1 million times faster than previous versions, which is also 1 million times faster than the synapses in the human brain.

Programmable resistors are the key building blocks in analog deep learning, just like transistors are the core elements for digital processors. By repeating arrays of programmable resistors in complex layers, researchers can create a network of analog artificial "neurons" and "synapses" that execute computations just like a digital neural network. This network can then be trained to achieve complex AI tasks like image recognition and natural language processing.

In "analog deep learning," computation is performed in memory, so enormous loads of data do not have to be shuttled back and forth between memory and a processor.

"Normally, we would not apply such extreme fields across devices, in order to not turn them into ash. But instead, protons ended up shuttling at immense speeds across the device stack, specifically a million times faster compared to what we had before. And this movement doesn't damage anything, thanks to the small size and low mass of protons. It is almost like teleporting."
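The in-memory computation described above is just Ohm's law plus Kirchhoff's current law on a resistor crossbar. Here is a minimal sketch, with made-up values, of how an array of programmable resistors performs a dense layer's multiply-accumulate in one physical step; it illustrates the general analog approach, not MIT's protonic device specifically.

```python
import numpy as np

# Sketch of analog in-memory matrix multiplication on a resistor crossbar.
# Each programmable resistor stores one weight as a conductance G[i, j];
# applying the input as voltages and reading the summed output currents
# computes I = G . V in a single physical step, so the weights never move
# between a memory and a processor. Values are made up for illustration.
rng = np.random.default_rng(1)
G = rng.uniform(0.0, 1.0, size=(4, 8))  # conductances = weight matrix (siemens)
v = rng.uniform(0.0, 0.2, size=8)       # inputs applied as voltages (volts)

i_out = G @ v                           # currents read at the outputs (amps)

# Same arithmetic as a dense layer's multiply-accumulate, written out:
dense = np.array([sum(G[r, c] * v[c] for c in range(8)) for r in range(4)])
assert np.allclose(i_out, dense)

# Training adjusts each conductance in place; in the device above, that
# update is done by shuttling protons through the stack to change G[i, j].
```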
via MIT's Department of Electrical Engineering and Computer Science: Murat Onen et al, Nanosecond protonic programmable resistors for analog deep learning, Science (2022). DOI: 10.1126/science.abp8064