Medical AI models rely on 'shortcuts' that could lead to misdiagnosis of COVID-19
Jun 2021, phys.org
Very interesting lesson for us humans, in terms of profiling and stereotypes:
The team found that, rather than learning genuine medical pathology, these models rely instead on shortcut learning to draw spurious associations between medically irrelevant factors and disease status. Here, the models ignored clinically significant indicators and relied instead on characteristics, such as text markers or patient positioning, that were specific to each dataset to predict whether someone had COVID-19.
via University of Washington: AI for radiographic COVID-19 detection selects shortcuts over signal, Nature Machine Intelligence (2021). DOI: 10.1038/s42256-021-00338-7
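To make the failure mode concrete, here's a toy sketch of shortcut learning (all the data, feature names, and the "model" are invented for illustration; the real study used deep networks on radiographs). A naive learner that just picks whichever feature best separates its training labels will happily latch onto a dataset-specific text marker instead of the actual pathology signal, and then fall apart on a hospital whose images lack that marker:

```python
# Toy illustration of shortcut learning (all data here is invented).
# In "hospital A"'s dataset, a text marker happens to co-occur with
# positive cases, so a naive learner keys on the marker instead of
# the noisier genuine pathology feature.

def train_threshold(samples):
    """Pick whichever single feature best separates the training labels."""
    best = None
    for feat in samples[0][0]:
        correct = sum(1 for x, y in samples if (x[feat] >= 0.5) == y)
        if best is None or correct > best[1]:
            best = (feat, correct)
    return best[0]  # the feature the "model" learned to rely on

# Hospital A: 'text_marker' perfectly tracks the label; 'pathology' is noisy.
hospital_a = [
    ({"pathology": 0.9, "text_marker": 1.0}, True),
    ({"pathology": 0.4, "text_marker": 1.0}, True),   # noisy true case
    ({"pathology": 0.6, "text_marker": 0.0}, False),  # noisy negative
    ({"pathology": 0.1, "text_marker": 0.0}, False),
]

shortcut = train_threshold(hospital_a)
print(shortcut)  # 'text_marker' -- the spurious shortcut wins

# Hospital B: no such marker, so the shortcut generalizes badly.
hospital_b = [
    ({"pathology": 0.9, "text_marker": 0.0}, True),
    ({"pathology": 0.1, "text_marker": 0.0}, False),
]
accuracy = sum(
    1 for x, y in hospital_b if (x[shortcut] >= 0.5) == y
) / len(hospital_b)
print(accuracy)  # 0.5 -- coin-flip performance at the new hospital
```

The point of the toy: nothing in the training objective distinguishes a clinically meaningful feature from a spurious one that happens to correlate with the label in this dataset.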
Image credit: Sabine62 via Fractal Forums - Weaving - 2018
Computer scientist researches interpretable machine learning, develops AI to explain its discoveries
Nov 2020, phys.org
Finally, a deep learning machine that can explain how it got its results (something previously unavailable, hence the term "black box AI"). Or is this just mansplaining? NIPS-splaining.
This Looks Like That: Deep Learning for Interpretable Image Recognition, Chaofan Chen et al, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
Figure 1: Image of a clay-colored sparrow, and how parts of it look like learned prototypical parts of a clay-colored sparrow used to classify the bird's species.
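The "this looks like that" idea can be sketched in a few lines (a toy with invented vectors; the actual ProtoPNet operates on convolutional feature maps and learns its prototypes during training). Each class owns a few prototype parts; an image is scored by how closely its patches match those prototypes, so every prediction comes with a pointer to which patch looked like which prototype:

```python
# Toy sketch of prototype-based "this looks like that" reasoning.
# All vectors and class names here are invented for illustration.

def similarity(a, b):
    """Higher when patch a resembles prototype b (negative squared distance)."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def classify(patches, prototypes_by_class):
    scores, evidence = {}, {}
    for cls, protos in prototypes_by_class.items():
        # Best (similarity, patch index, prototype index) for this class.
        best = max(
            (similarity(p, proto), i, j)
            for i, p in enumerate(patches)
            for j, proto in enumerate(protos)
        )
        scores[cls] = best[0]
        evidence[cls] = {"patch": best[1], "prototype": best[2]}
    prediction = max(scores, key=scores.get)
    return prediction, evidence[prediction]

# Hypothetical prototypical parts for two sparrow species.
prototypes = {
    "clay_colored_sparrow": [[0.9, 0.1], [0.5, 0.5]],  # e.g. crown stripe, wing bar
    "field_sparrow":        [[0.1, 0.9]],              # e.g. pink bill
}

# Patches cropped from a test image.
patches = [[0.85, 0.15], [0.2, 0.8]]

pred, why = classify(patches, prototypes)
print(pred, why)  # the label, plus which patch matched which prototype
```

The interpretability comes for free: the `evidence` dict is exactly the "this part of the bird looks like that learned part" explanation the figure illustrates.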
DeepMind's AlphaZero breathes new life into the old art of chess
Sep 2020, phys.org
Maybe disturbing? We need the robot to help us learn to be human again? To be ... something? Again?
On the topic of playing chess vs practicing chess, and of course, on being human (vs being a computer) -- As chess grandmaster Vladimir Kramnik recently told Wired magazine, "For quite a number of games on the highest level, half of the game—sometimes a full game—is played out of memory. You don't even play your own preparation; you play your computer's preparation."
The solution? Change the game. With the help of AlphaZero, they discovered new variations of chess that would force players to play again for the first time, I guess. The changes included forbidding castling, introducing self-capture, and allowing pawns to move two squares at once. We evolve together.
Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess, arXiv:2009.04374 [cs.AI]
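One way to picture what "alternative rule sets" means in code (a toy sketch, not the paper's implementation): represent each variant as a handful of toggles and let move generation consult them. Here only forward pawn pushes are modeled, with a hypothetical `torpedo_pawns` toggle standing in for the two-squares-from-anywhere variant:

```python
# Toy sketch: chess variants as rule toggles (names invented; only
# forward pawn pushes on an empty board are modeled here).

from dataclasses import dataclass

@dataclass(frozen=True)
class Rules:
    castling: bool = True        # classic chess allows castling
    self_capture: bool = False   # variant: capture your own pieces
    torpedo_pawns: bool = False  # variant: two-square pawn push from any rank

def pawn_pushes(rank, rules, start_rank=2):
    """Destination ranks for a white pawn on the given rank (1-8)."""
    moves = []
    if rank < 8:
        moves.append(rank + 1)
    # Two-square push: classically only from the start rank,
    # but from anywhere under the torpedo toggle.
    if rank < 7 and (rules.torpedo_pawns or rank == start_rank):
        moves.append(rank + 2)
    return moves

classic = Rules()
torpedo = Rules(torpedo_pawns=True)

print(pawn_pushes(4, classic))  # [5]
print(pawn_pushes(4, torpedo))  # [5, 6]
```

The paper's move, roughly, was to flip toggles like these and let AlphaZero self-play under each rule set to see which variants stay balanced and interesting.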
Post Script:
Research finds some AI advances are over-hyped
June 2020, phys.org
An article in Science magazine assessing the study cites a meta-analysis of information retrieval algorithms used in search engines over a decade through 2019, which found "the high mark was actually set in 2009." Another study of neural network recommendation systems used by streaming services determined that six of the seven procedures examined failed to improve upon the simpler algorithms devised years earlier.
But isn't the new wave of neural networks about size? The algorithms themselves are rather simple, but it's the size of the network of GPUs times the size of the dataset that makes the system so powerful.
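For a sense of what those hard-to-beat "simpler algorithms" look like, here's a toy popularity baseline (invented data): recommend whatever most people watched that this user hasn't seen. Baselines of roughly this flavor are what the recommendation-systems study found many neural approaches failed to improve on:

```python
# Toy popularity-baseline recommender (all usernames and titles invented).
# The whole "model" is a frequency count over everyone's watch history.

from collections import Counter

def popularity_recommend(history, user_seen, k=2):
    """Recommend the k most-watched items the user hasn't seen yet."""
    counts = Counter(item for items in history.values() for item in items)
    ranked = [item for item, _ in counts.most_common() if item not in user_seen]
    return ranked[:k]

history = {
    "u1": ["matrix", "inception", "heat"],
    "u2": ["matrix", "heat"],
    "u3": ["matrix", "inception"],
}

recs = popularity_recommend(history, user_seen={"matrix"})
print(recs)  # the two most popular items this user hasn't watched
```

No GPUs, no embeddings, no training loop, which is exactly why it makes such an awkward benchmark for the fancier systems.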
Also, this sure sounds like an argument against the big dogs in tech inhibiting innovation through monopolistic control.