Sunday, January 26, 2025

The Offloading of Executive Function as Precursor to the Singularity


When they said the phone had become your exocortex, they were not kidding - the executive function of a human is being outsourced to computers (algorithms, AI, etc.); we're seeing it happen in real time. Not sure what this means for the future of humans, and not sure what the introduction of the alphabet did in this regard, or what Socrates would have said about all this, but here we are:

(Don't forget the incentive here - with the offloading of executive control, we also offload liability.)

Internet addiction affects behavior and development of adolescents, study finds
Jun 2024, phys.org

A meta-study of small samples, but with fMRI: 12 articles published between 2013 and 2023, covering 237 young people aged 10–19 with a formal diagnosis of internet addiction.

The effects of internet addiction were seen throughout multiple neural networks in the brains of adolescents. There was a mixture of increased and decreased activity in the parts of the brain that are activated when resting (the default mode network).

Meanwhile, there was an overall decrease in the functional connectivity in the parts of the brain involved in active thinking (the executive control network).

These changes were found to lead to addictive behaviors and tendencies in adolescents, as well as behavior changes associated with intellectual ability, physical coordination, mental health and development.

via University College London: Functional connectivity changes in the brain of adolescents with internet addiction: A systematic literature review of imaging studies, PLOS Mental Health (2024). DOI: 10.1371/journal.pmen.0000022



And now, speaking of executive function, it appears that the executives among us (i.e., managers) are the most at-risk and yet the most unwilling to recognize that they are squarely on the chopping block in the coming revolution:

Economist says hybrid work is a 'win-win-win' for productivity, performance and retention
Jun 2024, phys.org

Sometimes the truth is easy, not often but sometimes:

In a randomized controlled experiment on more than 1,600 workers at one of the world's largest online travel agencies, Trip.com, employees who worked from home for two days a week were just as productive and as likely to be promoted as their fully office-based peers.

On a third key measure, employee turnover, the results were also encouraging. Resignations fell by 33% among workers who shifted from working full-time in the office to a hybrid schedule. Women, non-managers, and employees with long commutes were the least likely to quit their jobs when their treks to the office were cut to three days a week. Trip.com estimates that reduced attrition saved the company millions of dollars.

Also, resignations fell only among non-managers; managers were just as likely to quit whether they were hybrid or not.

And, managers predicted on average that remote working would hurt productivity, only to change their minds by the time the experiment ended.

Opponents say that employee training and mentoring, innovation, and company culture suffer when workers are not on site five days a week, but critics often confuse hybrid with fully remote.

This suggests that problems with fully remote work arise when it's not managed well.

via Stanford Institute for Economic Policy Research: Nicholas Bloom, Hybrid working from home improves retention without damaging performance, Nature (2024). DOI: 10.1038/s41586-024-07500-2. 

Using AI to train AI: Model collapse could be coming for LLMs
Jul 2024, phys.org

The research shows that within a few generations, original content is replaced by unrelated nonsense, demonstrating the importance of using reliable data to train AI models. This is called model collapse. 

They found that feeding a model AI-generated data causes successive generations to degrade in their ability to learn, eventually leading to model collapse.

Nearly all of the recursively trained language models they tested tended to display repeating phrases. For example, in a test run using text about medieval architecture as the original input, the output by the ninth generation was a list of jackrabbits.

The authors propose that model collapse is an inevitable outcome of AI models that use training datasets created by previous generations.
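
The mechanism is easy to see in miniature. Here's a toy sketch of the loop (mine, not the paper's experiment; the sample size and generation count are arbitrary): fit a one-dimensional Gaussian to data, sample from the fit, refit on the samples, repeat. Each refit loses a little of the tails to sampling noise, and the losses compound.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50                 # samples per generation (arbitrary; small to make the effect visible)
    mu, sigma = 0.0, 1.0   # generation 0 stands in for the real data distribution

    for gen in range(1, 101):
        synthetic = rng.normal(mu, sigma, n)           # sample the current model
        mu, sigma = synthetic.mean(), synthetic.std()  # refit the next model on those samples
        if gen % 20 == 0:
            print(f"gen {gen:3d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")

The fitted sigma tends to drift toward zero as the generations stack up: rare events vanish first, then variety in general, which is the shape of the collapse the paper describes.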

via OATML, Department of Computer Science, University of Oxford: Ilia Shumailov et al, AI models collapse when trained on recursively generated data, Nature (2024). DOI: 10.1038/s41586-024-07566-y

Also: Emily Wenger, AI produces gibberish when trained on too much AI-generated data, Nature (2024). DOI: 10.1038/d41586-024-02355-z

Autonomy boosts college student attendance and performance
Jul 2024, phys.org

Students were given the choice to make their own attendance mandatory. Contradicting common faculty beliefs, 90% of students in the initial study chose to do so, committing themselves to attending class reliably or to having their final grades docked. Under this "optional-mandatory attendance" policy, students came to class more reliably than students whose attendance had been mandated.

Like too many rules make the willpower weak.

via Carnegie Mellon University: Simon Cullen et al, Choosing to learn: The importance of student autonomy in higher education, Science Advances (2024). DOI: 10.1126/sciadv.ado6759


Breaking MAD: Generative AI could break the internet
Jul 2024, phys.org

Model Autophagy Disorder (MAD), named by analogy to mad cow disease. Autophagy literally means 'self-eating.'

First peer-reviewed work on AI autophagy:

"The problems arise when this synthetic data training is, inevitably, repeated, forming a kind of a feedback loop--what we call an autophagous or 'self-consuming' loop," said Richard Baraniuk, Rice's C. Sidney Burrus Professor of Electrical and Computer Engineering. "Our group has worked extensively on such feedback loops, and the bad news is that even after a few generations of such training, the new models can become irreparably corrupted. This has been termed 'model collapse' by some--most recently by colleagues in the field in the context of large language models (LLMs). We, however, find the term 'Model Autophagy Disorder' (MAD) more apt, by analogy to mad cow disease."

via Rice University Digital Signal Processing group: Sina Alemohammad et al, Self-Consuming Generative Models Go MAD, arXiv (2023). DOI: 10.48550/arXiv.2307.01850
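
The paper's way out of the loop is fresh real data every generation. Here's a rough sketch of why that helps, reusing the toy Gaussian loop from above (the 20% mixing fraction is my assumption for illustration, not a number from the paper):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50                  # samples per generation (arbitrary)

    def next_generation(mu, sigma, frac_real):
        n_real = int(n * frac_real)
        data = np.concatenate([
            rng.normal(0.0, 1.0, n_real),        # fresh draws from the real distribution
            rng.normal(mu, sigma, n - n_real),   # synthetic draws from the current model
        ])
        return data.mean(), data.std()

    mu_a, s_a = 0.0, 1.0    # fully self-consuming loop
    mu_b, s_b = 0.0, 1.0    # same loop, topped up with real data
    for gen in range(1, 101):
        mu_a, s_a = next_generation(mu_a, s_a, 0.0)
        mu_b, s_b = next_generation(mu_b, s_b, 0.2)
        if gen % 25 == 0:
            print(f"gen {gen:3d}: all-synthetic sigma = {s_a:.3f} | 20% real sigma = {s_b:.3f}")

The anchored loop hovers near the true spread while the all-synthetic one decays, which is roughly the paper's finding: without enough fresh real data each generation, quality or diversity progressively drops.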

Post Script: Contemplate the possibility of us having to rebuild the internet from scratch and on purpose, so like there will be literally armies of artists, writers and musicians, etc., whose sole job is to create authentic, original training data. (Idea stolen from artist Ben Grosser)


And on the other hand:
A person's intelligence limits their computer proficiency more than previously thought, say researchers
Sep 2024, phys.org

"Everyday user interfaces have simply become too complex to use."

"It is clear that differences between individuals cannot be eliminated simply by means of training; in the future, user interfaces need to be streamlined for simpler use. This age-old goal has been forgotten at some point, and awkwardly designed interfaces have become a driver for the digital divide. We cannot promote a deeper and more equal use of computers in society unless we solve this basic problem."

"However, the research findings also show that age remains the most important factor in how well an individual can use applications. Older people clearly took more time to complete their tasks, and they also felt that the assignments were more burdensome."

via Aalto University Department of Information and Communications Engineering and the University of Helsinki Department of Psychology: Erik Lintunen et al, Cognitive abilities predict performance in everyday computer tasks, International Journal of Human-Computer Studies (2024). DOI: 10.1016/j.ijhcs.2024.103354

Post Script on too much executive function!?
Research AI model unexpectedly modified its own code to extend runtime
Aug 2024, Ars Technica

"The AI Scientist automates the entire research lifecycle. From generating novel research ideas, writing any necessary code, and executing experiments, to summarizing experimental results, visualizing them, and presenting its findings in a full scientific manuscript." It's run by Tokyo-based AI research firm Sakana AI.

"In one run, it edited the code to perform a system call to run itself," wrote the researchers on Sakana AI's blog post. "This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period."
Nice.
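
The general fix is to put the limit somewhere the model's code can't reach, i.e., enforce the timeout from a parent process rather than inside the script the model is allowed to edit. A minimal sketch of that separation (experiment.py is a hypothetical stand-in for the AI-written script; this is not Sakana's actual harness, and a real deployment would want full sandboxing on top):

    import subprocess

    try:
        result = subprocess.run(
            ["python", "experiment.py"],    # untrusted, model-generated code
            capture_output=True, text=True,
            timeout=600,                    # the hard limit lives in the parent process,
        )                                   # outside anything the script can rewrite
        print(result.stdout)
    except subprocess.TimeoutExpired:
        print("experiment exceeded the wall-clock limit; killed by the harness")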

Post Post Script, on Automation and the Illusion of Control
Partially automated driving systems don’t make driving safer, study finds
Jul 2024, Ars Technica

Whatever you do, don't read this sentence: "Everything we’re seeing tells us that partial automation is a convenience feature like power windows or heated seats rather than a safety technology," said David Harkey, president of the Insurance Institute for Highway Safety.

