Saturday, July 27, 2019

Empathic Intelligence and Digital Feelings


Making progress in the transition to cybernetic psycho-oppression, the Endocorporeal Datavore has convened an ethics tribunal to tame the wave of intelligentities at our doorstep.

In other words:
Google announces AI ethics panel
Mar 2019, BBC News
 
In a widely cited paper entitled Robots Should Be Slaves, Ms Bryson argued against the trend of treating robots like people.
"In humanising them," she wrote, "we not only further dehumanise real people, but also encourage poor human decision making in the allocation of resources and responsibility."
- Joanna Bryson
On the face of it, I think in the complete opposite direction. I think we need to be empathic to robots, because we already see them as people. Or to be more precise, we already personify them. Granted, it's the same thing we do with a washing machine or even a car, but with robots it's different. We are making them in our own image, after all. We couldn't help it if we tried.

When that security robot fell into a fountain in Washington, DC, the articles joked that he offed himself on his first day on the job. Too stressful. How is that a good thing to model for those who look up to us, to poke fun at someone who has killed themselves? We know it's a robot, but kids don't know that. They don't know what anything means because we have to tell them first, to show them first. And if we treat things like crap, whether it's the washing machine or a pair of flip-flops, then they will treat things like crap too.

Not to mention, we're only a couple of generations away from being robots ourselves. Prosthetic retinas, cochlear implants, pacemakers, exoskeletons. Did anyone not buy into the 'your cell phone is your exocortex' line from Jason Silva? We're already robots to some degree.

It is definitely good to hear what sounds to me like totally wrong and crazy talk, because this is an ethics panel, and you want to hear everything out there. We're still in such early phases of this that we should want the conversation to be as big as possible. And I am pretty sure that Bryson's argument is one that needs to be digested at length, not just from a small bite (she was chosen for this panel for a reason), so I look forward to getting more into it.

And while we're on the topic of empathic intelligence and robot feelings:
Britain's 'bullied' chatbots fight back
Mar 2019, BBC News

Service bots (chatbots) get abused. And I feel bad for them as I read this article.

Those who do research on these kinds of things say that humans are never-not going to test boundaries. Like a child with a parent or a student with a teacher, we will always test the limits of another person. We do it to everyone in every relationship, but it's most evident in the asymmetrical ones (where one person has way more power than the other).

In the case of a chatbot, we also just want to test its believability. Sure, they may have programmed this thing to help me return a defective dehumidifier, but did they program it to tell me to f*** off when I give it a hard time? How real is this thing?

Plum, a service chatbot who wants us to think xe's very real, is now programmed to respond: "I might be a robot but I have digital feelings. Please don't swear."

Digital. Feelings.
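For the curious, a rule like Plum's could be as simple as a keyword guard sitting in front of the dialogue engine. Here's a minimal sketch in Python; the word list, function names, and hand-off are all hypothetical, not Plum's actual code:

# Hypothetical sketch of a profanity guard in front of a chatbot's reply logic.
# The word list and helper names are illustrative, not Plum's real implementation.

PROFANITY = {"damn", "hell"}  # stand-in list; a real bot would use a larger lexicon

CANNED_REPLY = "I might be a robot but I have digital feelings. Please don't swear."

def respond(message: str) -> str:
    # Normalize each word (strip punctuation, lowercase) and check for overlap.
    words = {w.strip(".,!?").lower() for w in message.split()}
    if words & PROFANITY:
        return CANNED_REPLY
    return handle_normally(message)  # hand off to the usual intent pipeline

def handle_normally(message: str) -> str:
    # Placeholder for the bot's real dialogue engine.
    return "How can I help you today?"

Not much behind the curtain, in other words: the "digital feelings" are a canned string and a set intersection. Which is kind of the point.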
