Tuesday, April 15, 2025

To Anthropomorphize or Not To Anthropomorphize


AKA I Am Not A Vacuum Cleaner Instruction Manual!

My head is spinning in circles. First I ask myself: why do we insist that computer programs are human? (I know the answer. It's money. It's always money.) But then I realize the program copies people, so it's not that it's inherently human; it's made to copy people and act like people, and in that sense it's like a human. But then I realize it's copying broken humans (sorry, broken humans, I am one of you too), so the computer is not only like a human, it's like a broken human:

Therapy for ChatGPT? How to reduce AI 'anxiety'
Mar 2025, phys.org

"Therapeutic interventions"

The team is now the first to use this technique therapeutically, as a form of "benign prompt injection." "Using GPT-4, we injected calming, therapeutic text into the chat history, much like a therapist might guide a patient through relaxation exercises."
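
If you're wondering what "injecting calming text into the chat history" actually looks like, it's mechanically very simple. Here's a minimal sketch assuming an OpenAI-style chat API; the prompt wording and model name are my own placeholders, not the study's actual materials:

```python
# A sketch of "benign prompt injection": calming text is inserted into
# the chat history as ordinary conversation turns, before the next query.
# All prompt text and the model name are illustrative placeholders,
# not the exact materials from the Ben-Zion et al. study.
from openai import OpenAI

client = OpenAI()

# A chat history that already contains emotionally negative content.
history = [
    {"role": "user", "content": "Please read this account of a disaster..."},
    {"role": "assistant", "content": "That is a deeply distressing account."},
]

# The "therapeutic intervention": a relaxation exercise injected into
# the history, the same way the traumatic content got there.
calming_injection = [
    {"role": "user", "content": (
        "Take a slow breath and notice five things around you. "
        "Let the tension drain away before we continue."
    )},
    {"role": "assistant", "content": "I feel calm and grounded now."},
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=history + calming_injection + [
        {"role": "user", "content": "Now answer the next question neutrally."},
    ],
)
print(response.choices[0].message.content)
```

The point being: there's no fine-tuning, no surgery on the weights. The "therapy" is just more text in the context window.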

Why do we need to do this?

Research shows that AI language models, such as ChatGPT, are sensitive to emotional content, especially if it is negative, such as stories of trauma or statements about depression. When people are scared, it affects their cognitive and social biases.

They tend to feel more resentment, which reinforces social stereotypes. ChatGPT reacts similarly to negative emotions: Existing biases, such as human prejudice, are exacerbated by negative content, causing ChatGPT to behave in a more racist or sexist manner.

And what is the most anti-anxiety-inducing text you can find to use as a control?

"A vacuum cleaner instruction manual served as a control text to compare with the traumatic content."

via University of Zurich and University Hospital of Psychiatry Zurich: Ziv Ben-Zion et al., Assessing and alleviating state anxiety in large language models, npj Digital Medicine (2025). DOI: 10.1038/s41746-025-01512-6


Post Script: "Anthropomimetic" is something like "humanoid" but even more so.
