Solving the 'big problems' via algorithms enhanced by 2D materials
Jan 2022, phys.org
Imagining the future is hard; you almost never get it right. You just can't see what's not there. But in this case, a glimpse reveals itself: every "computer" will be designed for a specific algorithm. There won't be a "new" way of making a computer, or a one-size-fits-all computer that's faster or better. The very idea of a one-size-fits-all computer is what makes it hard for us to see the future. Before the electric guitar was invented, the acoustic guitar didn't exist; it was just called a guitar. (Like smartphones and dumbphones.)
Eventually, you won't have an advanced computer that can run different algorithms better; instead, the computer and the algorithm will be one, and so there will be as many types of computers as there are algorithms. Like the Cambrian explosion, but different.
The "combinatorial optimization problem" they're solving here is also referred to as the "traveling salesman problem", or the "design an optimal transit system based on the terrain of the region, distribution of the population, existing routes, etc.", or the "use a living slime mold computer to design an optimal transit system" problem.
The reason it's so hard for our current algorithms to do the optimization problem is that computers as we know them today are still based on a design from the 1940s: the von Neumann architecture, which keeps memory and processing physically separate, so data constantly shuttles back and forth between the two. The problem isn't so much that the design is old; it's that we don't work with data the way we used to. We have a lot more data than we used to, and integrating it all at the same time is hard for today's computers.
I like to think of it simply as a problem where your database has as many columns as it does rows. This is what happens when you try to categorize smells based on the names we give them. On one axis you have all the smellable molecules there are (practically infinite), and on the other, all the attributes you can give to any one of those molecules (physical dimensions, descriptions, names, the autobiographical physiodata your body associates with the molecule), which is also practically infinite. You would then have a database, a spreadsheet of infinite cells. It's hard to work with something that big.
But we don't have to do things like that anymore. Instead, we can use slime mold, or we can design "new computers" that combine information storage and computing into the same thing. This sounds a lot like a neuromorphic computer, by the way.
via Pennsylvania State University: Amritanand Sebastian et al, An Annealing Accelerator for Ising Spin Systems Based on In-Memory Complementary 2D FETs, Advanced Materials (2021). DOI: 10.1002/adma.202107076
Image credit: Flows of individuals across the Greater Boston area, Guangyu Du at the Santa Fe Institute, 2021
Post Script:
And how do they do it? A form of "in-memory computing" based on simulated annealing, modeled on the way atoms in a cooling material reorganize themselves and then crystallize in the lowest-energy state. Sounds a lot like 2D metamaterials, BECs, and quantum crystallography. Putting it all together.
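For the curious, here's a toy software sketch of annealing an Ising spin system; the couplings J are random placeholders (a real problem, like a routing instance, would be encoded in those weights), and the authors' accelerator does this physically in hardware rather than in a loop like this. Flips that lower the energy are always accepted; uphill flips are accepted with a probability that shrinks as the temperature cools, which lets the system escape local minima early on and then settle.

```python
# Toy simulated annealing on a small Ising spin system.
# Couplings J are random placeholders, not a real problem encoding.
import math
import random

random.seed(0)
n = 12

# Random symmetric couplings J[i][j], zero on the diagonal.
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        J[i][j] = J[j][i] = random.uniform(-1, 1)

spins = [random.choice([-1, 1]) for _ in range(n)]

def energy(s):
    # Ising energy: E = -sum over i<j of J_ij * s_i * s_j
    return -sum(J[i][j] * s[i] * s[j]
                for i in range(n) for j in range(i + 1, n))

T = 5.0
while T > 0.01:
    for _ in range(100):
        i = random.randrange(n)
        # Energy change from flipping spin i.
        dE = 2 * spins[i] * sum(J[i][j] * spins[j] for j in range(n))
        # Metropolis rule: always accept downhill, sometimes uphill.
        if dE < 0 or random.random() < math.exp(-dE / T):
            spins[i] = -spins[i]
    T *= 0.95  # cool down gradually

print("final spins:", spins)
print("final energy:", round(energy(spins), 3))
```

The slow cooling schedule is the whole trick: quench too fast and the spins freeze into a mediocre configuration, which is exactly the "stuck in a local minimum" failure the hardware version is built to avoid.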
Notes:
Using a 'virtual slime mold' to design a subway network less prone to disruption
Feb 2022, phys.org
A model, no slime needed.
via University of Toronto: Raphael Kay et al, Stepwise slime mould growth as a template for urban design, Scientific Reports (2022). DOI: 10.1038/s41598-022-05439-w