Friday, July 26, 2024

Calling All Alphas


Using this picture just for the prompt: 
A flowchart of scientific icons that represents a partner ecosystem for use as a slide background. modern. brand colours are #024da1 and #37b884. blues and greens. ultrahd, kodak colour quality, 8k.

First, just what you've been waiting for, a robot that can read braille:

3D-printed hairs: Professor developing tiny sensors to detect flow and environmental changes
Oct 2023, phys.org

Tiny, 3D-printed sensors that look like human hairs can sense sustained pressure, quick pressure, temperature changes, and sliding force. Uses could include minimally invasive surgical robots equipped with cilia mechanoreceptors to better detect minute changes in pressure or temperature, industrial machines that measure air or water flow, and a robot that can read braille.

via Virginia Commonwealth University: Phillip Glass et al, 3D‐Printed Artificial Cilia Arrays: A Versatile Tool for Customizable Mechanosensing, Advanced Science (2023). DOI: 10.1002/advs.202303164


Next, there are a couple of themes mixing and matching here, but the idea is that the Big Data revolution of the last decade is really the Big Training Set for the artificial minds of the future. We're getting another planet's worth of mental labor, like having an extra few million chemists, physicists, and biologists. But then there's the overall computing infrastructure we've amassed, and the fact that we can apparently use its latent energy, the idle power in between processes, to run calculations. So the computer can perform calculations on itself while it does these other things; maybe we would call it a meta-computer, or meta-computation, or sentience (jk). In the end we see that humans are still, and will be for a long time, an essential ingredient:

Google’s DeepMind finds 2.2M crystal structures in materials science win
Nov 2023, Ars Technica

The trove of theoretically stable but experimentally unrealized combinations identified using an AI tool known as GNoME is more than 45 times larger than the number of such substances unearthed in the history of science, according to a paper published in Nature on Wednesday.

DeepMind's AI system AlphaGeometry able to solve complex geometry problems at a high level
Jan 2024, phys.org

via Google's DeepMind and NYU: Trieu H. Trinh et al, Solving olympiad geometry without human demonstrations, Nature (2024). DOI: 10.1038/s41586-023-06747-5


Chemists use blockchain to simulate more than 4 billion chemical reactions essential to origins of life
Jan 2024, phys.org

Great - this whole thing is freaking me out, and I'm not sure why; combining distributed computing with biological research is the plot of a good algorithmic-overlord origin story, though:

Researchers chose a set of starting molecules likely present on early Earth, including water, methane, and ammonia, and set rules about which reactions could occur between different types of molecules. They translated this information into a language computers can understand, then used the blockchain to calculate which reactions would occur over multiple expansions of a giant reaction network.
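Here's a minimal sketch of that kind of iterative expansion, with made-up molecules and a single placeholder reaction rule standing in for the real rule set (the function names and rules are my own illustration, not the paper's encoding):

from itertools import combinations

# Starting molecules plausibly present on early Earth (per the article).
seed_pool = {"H2O", "CH4", "NH3", "HCN", "H2S"}

def apply_rules(a, b):
    # Placeholder for the real reaction rules: return the products that
    # molecules a and b could form, or an empty set if no rule applies.
    if {a, b} == {"CH4", "H2O"}:
        return {"CH2O", "H2"}
    return set()

def expand(pool, generations=3):
    # React every pair in the pool, add any new products, and repeat;
    # each pass is one "expansion" of the growing reaction network.
    network = set()  # edges of the form (reactants, products)
    for _ in range(generations):
        new_molecules = set()
        for a, b in combinations(sorted(pool), 2):
            products = apply_rules(a, b)
            if products:
                network.add(((a, b), tuple(sorted(products))))
                new_molecules |= products - pool
        if not new_molecules:  # nothing new formed; the network has converged
            break
        pool = pool | new_molecules
    return pool, network

molecules, reactions = expand(seed_pool)
print(len(molecules), "molecules,", len(reactions), "reactions")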

They generated the network using Golem, a platform that orchestrates portions of the calculations across hundreds of computers around the world, which receive cryptocurrency in exchange for computing time. "For a fraction of the cost, in two or three months, we finished a task of 10 billion reactions, 100k times bigger than we did previously."
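For scale, each expansion pass is embarrassingly parallel: the candidate molecule pairs can be split into batches and farmed out to workers. A rough sketch of that orchestration pattern, using Python's concurrent.futures as a local stand-in (Golem's actual API is not shown, and apply_rules is the same hypothetical placeholder as above):

from concurrent.futures import ProcessPoolExecutor
from itertools import combinations

def apply_rules(a, b):
    # Same hypothetical placeholder rule as in the previous sketch.
    return {"CH2O", "H2"} if {a, b} == {"CH4", "H2O"} else set()

def react_batch(pairs):
    # Worker task: evaluate the reaction rules for one batch of molecule pairs.
    results = []
    for a, b in pairs:
        products = apply_rules(a, b)
        if products:
            results.append(((a, b), tuple(sorted(products))))
    return results

def distributed_generation(pool, batch_size=10_000, workers=8):
    # Split one expansion pass into batches and hand them to worker
    # processes; a platform like Golem does the same thing across machines.
    pairs = list(combinations(sorted(pool), 2))
    batches = [pairs[i:i + batch_size] for i in range(0, len(pairs), batch_size)]
    with ProcessPoolExecutor(max_workers=workers) as executor:
        per_batch = list(executor.map(react_batch, batches))
    return [edge for batch in per_batch for edge in batch]

if __name__ == "__main__":
    print(distributed_generation({"H2O", "CH4", "NH3"}, batch_size=2, workers=2))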

The resulting network, termed NOEL for the Network of Early Life, started off with more than 11 billion reactions, which the team narrowed down to 4.9 billion plausible reactions. NOEL contains parts of well-known metabolic pathways like glycolysis, close mimics of the Krebs cycle, which organisms use to generate energy, and syntheses of 128 simple biotic molecules like sugars and amino acids.

Curiously, of the 4.9 billion reactions generated, only hundreds of reaction cycles could be called "self-replicating," which means that the molecules produce additional copies of themselves.
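That self-replication test can be stated as a small bookkeeping check: a closed loop of reactions is self-replicating if some molecule it consumes comes back out with a net gain. A toy version, using a formose-like loop as the example (the reaction encoding here is my own illustration, not the paper's):

from collections import Counter

def is_self_replicating(cycle):
    # cycle: a list of (reactants, products) tuples forming a closed loop.
    consumed, produced = Counter(), Counter()
    for reactants, products in cycle:
        consumed.update(reactants)
        produced.update(products)
    # Self-replicating: at least one molecule the cycle consumes is
    # regenerated in greater quantity than it was consumed.
    return any(produced[m] > consumed[m] for m in consumed)

# Toy formose-like loop: one glycolaldehyde unit in, two out; a net gain.
toy_cycle = [
    (("C2H4O2", "CH2O"), ("C3H6O3",)),
    (("C3H6O3", "CH2O"), ("C4H8O4",)),
    (("C4H8O4",), ("C2H4O2", "C2H4O2")),
]
print(is_self_replicating(toy_cycle))  # True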

The part that scares me: "With a platform like Golem you can connect your institution's network and harness the entire idle power of its computers to perform calculations. You could create this computing infrastructure without any capital expenditure."
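That quote is about architecture more than chemistry: a volunteer worker that only pulls jobs when its machine is otherwise quiet. A crude sketch of the idea (the job queue, threshold, and polling loop are all hypothetical; this is not Golem's client):

import os
import time

def idle_worker(job_queue, load_threshold=0.5, poll_seconds=30):
    # Run queued jobs only when the machine's 1-minute load average is low,
    # i.e. harvest idle capacity without getting in anyone's way.
    while True:
        one_minute_load, _, _ = os.getloadavg()  # Unix-only
        if one_minute_load < load_threshold and not job_queue.empty():
            job = job_queue.get()
            job()  # run one unit of work, e.g. one react_batch() call
        else:
            time.sleep(poll_seconds)  # machine busy or queue empty; back off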

via Korea Institute for Basic Science, the Polish Academy of Sciences, and Allchemy: Emergence of metabolic-like cycles in blockchain-orchestrated reaction networks, Chem (2024). DOI: 10.1016/j.chempr.2023.12.009


Widely used AI tool for early sepsis detection may be cribbing doctors' suspicions
Feb 2024, phys.org

Humans all the way down 

The Epic Sepsis Model is electronic medical record software that automatically generates sepsis risk estimates in the records of hospitalized patients; it serves 54% of patients in the United States and 2.5% of patients internationally.

"Sepsis has all these vague symptoms ... We still miss a lot of patients with sepsis"

The hope is that AI predictions could be instrumental in making that happen, but at present, they don't seem to be getting more out of patient data than clinicians are.

"We suspect that some of the health data that the Epic Sepsis Model relies on encodes, perhaps unintentionally, clinician suspicion that the patient has sepsis"

When predictions made by the AI at all stages of the patient's hospital stay were included, it correctly identified a high-risk patient 87% of the time. However, it was only correct 62% of the time when using patient data recorded before the patient met the criteria for having sepsis. Perhaps most telling, the model assigned higher risk scores to only 53% of the patients who developed sepsis when predictions were restricted to before a blood culture had been ordered.
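That evaluation boils down to recomputing sensitivity after censoring any model scores made at or after a clinical milestone. A rough sketch of the idea (the field names, data layout, and 0.5 threshold are my assumptions, not the NEJM AI protocol):

def sensitivity(sepsis_cases, cutoff_field, threshold=0.5):
    # sepsis_cases: patients who truly developed sepsis, each a dict with
    # timestamped model scores and the time of the named clinical milestone.
    flagged = 0
    for patient in sepsis_cases:
        usable = [score for t, score in patient["scores"] if t < patient[cutoff_field]]
        if usable and max(usable) >= threshold:
            flagged += 1
    return flagged / len(sepsis_cases)

# e.g. sensitivity(cases, "sepsis_criteria_time")  # ~0.62 in the study
#      sensitivity(cases, "blood_culture_time")    # ~0.53 in the study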

via University of Michigan: Fahad Kamran et al, Evaluation of Sepsis Prediction Models before Onset of Treatment, NEJM AI (2024). DOI: 10.1056/AIoa2300032

Post Script: Public service announcement - almost any study, report, quip, or advertisement you hear about "what an AI can do" is worthless (at least when it refers to commercial products like ChatGPT or other proprietary models from companies like OpenAI). We measure these systems however we want, with no way of normalizing our inquiries, and we can't see into the underlying models anyway because they're proprietary. It takes a study like this to find out what's really going on, and that's for a system already deployed for "54% of patients in the United States and 2.5% of patients internationally" (^above)
