As a wet bag of twitching proteins, your biosignatures are unavoidable. Everything you do sends a signal, waiting for someone, something to recognize your existence, your intent, your fate. You are not a mystery; your every move, every decision, every thought and desire, they are all being broadcast, in myriad ways. You think that CIA agent has a superpower because she can tell that you're lying by looking at the micro-twitches on your face? Nothing. We are no match for what's coming. And if we don't become half robot very soon, the Anthropocene will be marked not by plutonium or "plastic rocks" but by the sudden disappearance of humans from the fossil record. Let's begin:
Eye movements can be decoded by the sounds they generate in the ear, study shows
Nov 2023, phys.org
Fucking "ear squeaks" -
In 2018, Groh's team discovered that the ears make a subtle, imperceptible noise when the eyes move; the Duke team now shows that these sounds can reveal where your eyes are looking.
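Just to make the shape of the claim concrete: decoding gaze from ear sounds is, at bottom, a regression problem from a window of microphone samples to an eye position. Here is a minimal sketch on synthetic data; the signal model, window size, and ridge regression are my assumptions, not the Duke team's method.

```python
# Minimal sketch, synthetic data only -- NOT the Duke pipeline: regress
# horizontal gaze angle from short windows of ear-canal audio.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, window = 500, 200                 # 200 audio samples per eye movement

gaze_deg = rng.uniform(-20, 20, n_trials)   # hypothetical gaze targets in degrees
t = np.arange(window)
# pretend the ear emits a faint oscillation whose amplitude tracks gaze direction
audio = (gaze_deg[:, None] * 0.05 * np.sin(2 * np.pi * t / 50)
         + rng.normal(0, 0.5, (n_trials, window)))

X_tr, X_te, y_tr, y_te = train_test_split(audio, gaze_deg, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("R^2 on held-out eye movements:", round(model.score(X_te, y_te), 3))
```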
via Duke University: Stephanie N. Lovich et al, Parametric information about eye movements is sent to the ears, Proceedings of the National Academy of Sciences (2023). DOI: 10.1073/pnas.2303562120
AI-powered satellite analysis reveals the unseen economic landscape of underdeveloped nations
Dec 2023, phys.org
The researchers used Sentinel-2 satellite images from the European Space Agency (ESA) that are publicly available. They split these images into small six-square-kilometer grids. At this zoom level, visual information such as buildings, roads, and greenery can be used to quantify economic indicators.

The key feature of their research model is the "human-machine collaborative approach," which lets researchers combine human input with AI predictions for areas with scarce data. In this research, 10 human experts compared satellite images and judged the economic conditions in each area, with the AI learning from this human data and giving economic scores to each image. The results showed that the human–AI collaborative approach outperformed machine-only learning algorithms.
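To make the pipeline concrete, here is a minimal sketch of that general shape: tile imagery into grid cells, learn from a small set of human-scored cells, then let the model score everything else. The features, the fake "expert" scores, and the random-forest choice are all my assumptions for illustration, not the KAIST model.

```python
# Sketch of the human-machine collaborative shape on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_cells = 1000                       # hypothetical 6 km^2 grid cells
# stand-ins for per-cell visual features (building density, road length, greenery)
features = rng.random((n_cells, 3))

# a small subset of cells gets scored by human experts (faked here as a weighted sum)
labeled = rng.choice(n_cells, size=50, replace=False)
human_scores = features[labeled] @ np.array([0.6, 0.3, 0.1]) + rng.normal(0, 0.05, 50)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(features[labeled], human_scores)

# the machine extends the human judgments to every unlabeled cell
economic_scores = model.predict(features)
print("top 5 cells by predicted economic score:", np.argsort(economic_scores)[-5:])
```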
via KAIST Korea Advanced Institute of Science and Technology: Donghyun Ahn et al, A human-machine collaborative approach measures economic development using satellite imagery, Nature Communications (2023). DOI: 10.1038/s41467-023-42122-8
Artificial intelligence can predict events in people's lives, researchers show
Dec 2023, phys.org
Researchers have analyzed health data and attachment to the labor market for 6 million Danes in a model dubbed life2vec. They trained it, then asked it general questions such as: 'death within four years?'

Results are consistent with existing findings within the social sciences; for example, all things being equal, individuals in a leadership position or with a high income are more likely to survive, while being male, skilled, or having a mental diagnosis is associated with a higher risk of dying.
In a way, this thing is sequencing the events of a person's life, and making a prediction the same way it can already look at the words in your prompt and produce what should be the expected response.
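As a rough illustration of that framing (events-as-tokens, not the actual life2vec transformer), here is a toy sketch with made-up event sequences and labels:

```python
# The framing, not the model: treat a life as a token sequence and predict an
# outcome from it. life2vec itself is transformer-based; this bag-of-events
# baseline on synthetic data only illustrates the "life events as language" idea.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# hypothetical per-person event sequences and 4-year mortality labels
lives = [
    "born_1961 diagnosis_F32 job_unskilled income_low moved_city",
    "born_1958 job_manager income_high married",
    "born_1964 job_skilled income_mid diagnosis_none",
    "born_1955 job_unskilled income_low diagnosis_I21",
]
died_within_4y = [1, 0, 0, 1]

model = make_pipeline(CountVectorizer(token_pattern=r"\S+"), LogisticRegression())
model.fit(lives, died_within_4y)
print(model.predict_proba(["born_1960 job_manager income_high"])[:, 1])
```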
via Technical University of Denmark, University of Copenhagen, ITU, and Northeastern University: Sune Lehmann, Using sequences of life-events to predict human lives, Nature Computational Science (2023). DOI: 10.1038/s43588-023-00573-5.
Researchers develop algorithm that crunches eye-movement data of screen users
Feb 2024, phys.org
Hold onto your eyeballs
Raw Eye Tracking and Image eNcoder Architecture (RETINA) can zero in on selections before people have even made their decisions.
Can you read this and ask yourself on what planet you would ever want this?
The algorithm could be applied in many settings by all types of companies. For example, a retailer like Walmart could use it to enhance the virtual shopping experiences they are developing in the metaverse, a shared, virtual online world. Many of the VR devices people will use to explore the metaverse will have built-in eye tracking to help better render the virtual environment. With this algorithm, Walmart could tailor the mix of products on display in their virtual store to what a person will likely choose, based on their initial eye movements.

"Even before people have made a choice, based on their eye movement, we can say it's very likely that they'll choose a certain product," Wedel says. "With that knowledge, marketers could reinforce that choice or try to push another product instead."

The researchers are already working to commercialize the algorithm and extend their research to optimize decision-making. "We think eye tracking will become available at very large scales."
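A toy stand-in for the idea (not the RETINA deep learning architecture, which works on raw gaze data): summarize a gaze trace into per-product dwell times and predict the eventual choice from those. Everything here is synthetic.

```python
# Predict the eventual product choice from early gaze behavior (toy version).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_products = 300, 4

# synthetic gaze data: the eventually-chosen product attracts more early fixations
choices = rng.integers(0, n_products, n_trials)
dwell = rng.random((n_trials, n_products))           # early dwell time per product
dwell[np.arange(n_trials), choices] += 0.8           # bias toward the chosen item

model = LogisticRegression(max_iter=1000).fit(dwell, choices)
print("predicted from early gaze:", model.predict(dwell[:5]), "actual:", choices[:5])
```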
via (get ready) University of Maryland's PepsiCo Chair in Consumer Science in the Robert H. Smith School of Business, as well as Tel Aviv University and New York University: Moshe Unger et al, Predicting consumer choice from raw eye-movement data using the RETINA deep learning architecture, Data Mining and Knowledge Discovery (2023). DOI: 10.1007/s10618-023-00989-7
AI Art - Affluent Gentleman w Money Surrounded by Envious People 2 - 2024
Study discovers neurons in the human brain that can predict what we are going to say before we say it
Feb 2024, phys.org
"Although speaking usually seems easy, our brains perform many complex cognitive steps in the production of natural speech - including coming up with the words we want to say, planning the articulatory movements and producing our intended vocalizations"New neural probes allow scientists to see certain neurons become active before a phoneme is spoken out loud.Neuropixel probes were first pioneered at Massachusetts General Hospital and are smaller than the width of a human hair, yet have hundreds of channels capable of simultaneously recording the activity of dozens or even hundreds of individual neurons"
via Massachusetts General Hospital and Harvard Medical School: Arjun R. Khanna et al, Single-neuronal elements of speech production in humans, Nature (2024). DOI: 10.1038/s41586-023-06982-w
Improving traffic signal timing with a handful of connected vehicles
Feb 2024, phys.org
Using connected-vehicle GPS data from as little as 6% of vehicles on the road, the team's optimized signal timing produced a 20% to 30% decrease in the number of stops at signalized intersections.

"While detectors at intersections can provide traffic count and estimated speed, access to vehicle trajectory information, even at low penetration rates, provides more valuable data including vehicle delay, number of stops, and route selection."
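A back-of-the-envelope sketch of the low-penetration idea (not the Michigan optimization method): observe a 6% sample of vehicle trajectories, count their stops, and scale up by the penetration rate to estimate total stops at the intersection. All numbers are made up.

```python
# Estimate intersection-level stops from a small connected-vehicle sample.
import numpy as np

rng = np.random.default_rng(4)
n_vehicles, penetration = 2000, 0.06

stopped = rng.random(n_vehicles) < 0.45          # ground truth: 45% of cars stop
connected = rng.random(n_vehicles) < penetration # which cars share trajectories

observed_stops = np.sum(stopped & connected)     # stops we can actually see
estimated_total = observed_stops / penetration   # scale by penetration rate
print(f"true stops: {stopped.sum()}, estimated from 6% sample: {estimated_total:.0f}")
```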
via University of Michigan Center for Connected and Automated Transportation: Xingmin Wang et al, Traffic light optimization with low penetration rate vehicle trajectory data, Nature Communications (2024). DOI: 10.1038/s41467-024-45427-4
Post Script: It occurs to me that we have here one of the use cases for connected-services data collected by your car, likely without you knowing about it: "The team used connected vehicle data insights provided by General Motors to test its system ...".
Smartphone app uses AI to detect depression from facial cues
Feb 2024, phys.org
It's really instructive how easy it is to fuck things up real good - instead of being a boon to mental health, this sounds like complete dystopia, where humans have lost all control over their lives and live in absolute subjugation to machines infinitely smarter than us and upon whom we are hopelessly reliant for everyday existence (the simple act of unlocking your phone...)
MoodCapture took 125,000 images of 177 participants over 90 days. A first group of participants was used to program MoodCapture: based on their answers to the question "I have felt down, depressed, or hopeless" from the eight-point Patient Health Questionnaire (PHQ-8), the program correlated self-reports of feeling depressed with specific facial expressions such as gaze, eye movement, positioning of the head, and muscle rigidity, and with environmental features such as dominant colors, lighting, photo locations, and the number of people in the image.

The new study shows that passive photos are key to successful mobile-based therapeutic tools, Campbell said. They capture mood more accurately and frequently than user-generated photographs—or selfies—and do not deter users by requiring active engagement.

"These neutral photos are very much like seeing someone in-the-moment when they're not putting on a veneer, which enhanced the performance of our facial-expression predictive model," Campbell said.
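The shape of the analysis, stripped to a sketch (not the MoodCapture model): extract per-photo features and classify them against a PHQ-8-derived label. The feature list and the gradient-boosting choice are stand-ins, and the data is synthetic.

```python
# Toy depression classifier from passive-photo features and PHQ-8 labels.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_photos = 2000

# stand-ins for per-photo features: gaze, head angle, muscle rigidity,
# dominant-color brightness, number of people in frame
X = rng.random((n_photos, 5))
# PHQ-8 "felt depressed" item as a binary label, loosely coupled to one feature
y = (X[:, 0] + rng.normal(0, 0.3, n_photos)) > 0.6

clf = GradientBoostingClassifier()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
```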
via Dartmouth College: MoodCapture: Depression Detection using In-the-Wild Smartphone Images, arXiv (2024). DOI: 10.1145/3613904.3642680. arxiv.org/pdf/2402.16182.pdf
AI model trained with images can recognize visual indicators of gentrification
Mar 2024, phys.org
Wow, science
The ten-year U.S. Census and the five-year American Community Survey are aggregated by census tract rather than building by building, which is not sufficiently fine-grained. Now they're using visual cues of gentrification, like new construction or renovations, drawn from Google Street View images for entire cities.

They got construction permits to identify where construction was planned, and extracted data on business upscaling (laundry to coffee shop; grocery to high-end restaurant) from a national business directory. Then they manually reviewed pairs of images from 2007 through 2022 from the full Google Street View data set for three cities: Oakland, Denver, and Seattle.

About 74% of the time, the model predicted gentrification in the same places where gentrification had been previously found in other studies. Interestingly, the model identified a significant number of what could be false positives—census tracts the model labeled as gentrifying that had been labeled non-gentrifying in the past. These could have been errors by the model, but when the researchers looked at paired images in those census tracts, they found what looked like gentrification—new apartment buildings and neighborhood upgrades.

The conclusion: Because the model leveraged granular street-level imagery, it seemed to be spotting early signs of gentrification that previous studies had missed.
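A toy version of pairwise change detection (not the CityPulse model): embed a "before" and an "after" street-view image, then classify the pair as changed vs. unchanged from the difference of the embeddings. The embeddings here are random vectors standing in for real image features.

```python
# Pairwise before/after change detection on synthetic image embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_pairs, dim = 1000, 64

before = rng.normal(0, 1, (n_pairs, dim))        # stand-in image embeddings
changed = rng.random(n_pairs) < 0.3              # 30% of locations redeveloped
after = before + changed[:, None] * rng.normal(0.8, 0.2, (n_pairs, dim))

X = np.abs(after - before)                       # pairwise difference features
X_tr, X_te, y_tr, y_te = train_test_split(X, changed, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out change-detection accuracy:", clf.score(X_te, y_te).round(2))
```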
via Stanford: Tianyuan Huang et al, CityPulse: Fine-Grained Assessment of Urban Change with Street View Time Series, arXiv (2024). DOI: 10.48550/arxiv.2401.01107
Also: Tianyuan Huang et al, Detecting Neighborhood Gentrification at Scale via Street-level Visual Data, 2022 IEEE International Conference on Big Data (Big Data) (2023). DOI: 10.1109/BigData55660.2022.10020341
AI Art - Affluent Gentleman w Money Surrounded by Envious People 3 - 2024
Machine learning tools can predict emotion in voices in just over a second
Mar 2024, phys.org
As good as any human, they say
"Machine learning can be used to recognize emotions from audio clips as short as 1.5 seconds. Our models achieved an accuracy similar to humans when categorizing meaningless sentences with emotional coloring spoken by actors."The researchers drew nonsensical sentences from two datasets - one Canadian, one German - which allowed them to investigate whether ML models can accurately recognize emotions regardless of language, cultural nuances, and semantic content.Each clip was shortened to a length of 1.5 seconds, as this is how long humans need to recognize emotion in speech. It is also the shortest possible audio length in which overlapping of emotions can be avoided.The emotions included in the study were joy, anger, sadness, fear, disgust, and neutral.Deep neural networks filter sound components like frequency or pitch, for example when a voice is louder because the speaker is angry—to identify underlying emotions.Convolutional neural networks scan for patterns in the visual representation of soundtracks, much like identifying emotions from the rhythm and texture of a voice.This hybrid model merges both techniques.
But alas, good-as-human is not better-than-human:
"We wanted to set our models in a realistic context and used human prediction skills as a benchmark," Diemerling explained. "Had the models outperformed humans, it could mean that there might be patterns that are not recognizable by us." The fact that untrained humans and models performed similarly may mean that both rely on resembling recognition patterns, the researchers said.
via Center for Lifespan Psychology at the Max Planck Institute for Human Development: Implementing Machine Learning Techniques for Continuous Emotion Prediction from Uniformly Segmented Voice Recordings, Frontiers in Psychology (2024). DOI: 10.3389/fpsyg.2024.1300996
Robotic face makes eye contact, uses AI to anticipate and replicate a person's smile before it occurs
Mar 2024, phys.org
Coexpression - when a person, or a robot, smiles at you while you're smiling at them.

Emo is a robot that anticipates facial expressions and executes them simultaneously with a human. It can predict a forthcoming smile about 840 milliseconds before the person smiles. Emo could predict people's facial expressions by observing tiny changes in their faces as they begin to form an intent to smile.
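Illustrative only (not Emo's model): from a short window of facial-landmark motion, predict whether a smile will appear roughly 840 ms later. The "pre-smile micro-movement" signal here is synthetic, and the frame rate and window length are assumptions.

```python
# Predict an upcoming smile ~840 ms ahead from recent mouth-corner motion.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
fps, n = 30, 5000
lead = int(0.84 * fps)                    # ~840 ms lead time at 30 fps

# mouth-corner displacement over time: a slow ramp precedes each smile onset
smile = np.zeros(n, dtype=bool)
signal = rng.normal(0, 0.02, n)
onsets = rng.choice(np.arange(60, n - 60), size=80, replace=False)
for o in onsets:
    signal[o - 30:o] += np.linspace(0, 0.3, 30)   # pre-smile micro-movements
    smile[o:o + 15] = True                        # the smile itself

window = 15
X = np.array([signal[i - window:i] for i in range(window, n - lead)])
y = smile[window + lead:n]                # is there a smile `lead` frames later?
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy at ~840 ms lead:", clf.score(X_te, y_te).round(2))
```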
via the Creative Machines Lab at Columbia University School of Engineering and Applied Science: Yuhang Hu et al, Human-robot facial coexpression, Science Robotics (2024). DOI: 10.1126/scirobotics.adi4724
Also: Rachael E. Jack, Teaching robots the art of human social synchrony, Science Robotics (2024). DOI: 10.1126/scirobotics.ado5755
Exploring the factors that influence people's ability to detect lies online
Apr 2024, phys.org
People were more suspicious of others if they had themselves lied during the game, but also when other players had reported holding a statistically unlikely card. When compared to the predictions of an artificial, simulated lie detector, poor lie detection was associated with an over-reliance on one's own honesty (or dishonesty) and an under-reliance on statistical cues.

These findings imply that honest people may be particularly susceptible to scams, because they are the least likely to suspect a lie and thus detect a scam. Moreover, as social media platforms use recommendation systems that feed people more of the same content they like, these systems distort the likelihood of seeing certain information - fake news included. People's natural reliance on statistical likelihoods to infer what is true thus will not work well in these contexts.
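A toy version of what a "simulated lie detector" leaning on statistical cues looks like: flag a reported card as a likely lie when the report is improbable given the deck. The deck, the lying rate, and the liars' strategy are all made up for illustration, not the study's design.

```python
# Statistical-cue lie detector on a made-up card-reporting game.
import numpy as np

rng = np.random.default_rng(9)
n_players = 10000
cards = rng.integers(1, 11, n_players)           # true cards, 1-10 uniform
lying = rng.random(n_players) < 0.3              # 30% of players lie
reports = np.where(lying, 10, cards)             # liars always claim the top card

# P(lie | report) by Bayes: liars report 10 with prob 1, honest players with prob 0.1
p_lie = np.where(reports == 10, (0.3 * 1.0) / (0.3 * 1.0 + 0.7 * 0.1), 0.0)
flagged = p_lie > 0.5
print("accuracy of the statistical detector:", (flagged == lying).mean().round(3))
```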
via University College London (UCL) and Massachusetts Institute of Technology: Sarah Ying Zheng et al, Poor lie detection related to an under-reliance on statistical cues and overreliance on own behaviour, Communications Psychology (2024). DOI: 10.1038/s44271-024-00068-7