Wednesday, January 23, 2019
Fauxbots
Robots are already taking over.
Human computer programmers are influencing memetic propagation algorithms, which in turn are influencing human social media users. Another way of articulating this is to say that we are outfitting ourselves with a cybernetic limbic system. I'm channeling both Elon Musk's far-out interview about artificial emotional intelligence ecologies and Jaron Lanier's recent behavior modification talks (with a bit of Robert Sapolsky's Human Behavior lectures).
Not that it should come as a surprise, but news has it that a script designed to spread information is better at doing just that than we are. Couple that with the fact that misinformation spreads faster than factual information, and it's easy to see what a bad idea it was to offload water-cooler-style information-spreading to an algorithm optimized to sell consumer goods and services to a hypertargeted audience.
This is not to talk trash about technology, or social media, or even human nature; there are plenty of good things to come of all this. I mean, the ALS Ice Bucket Challenge, right?
It is an alert, however, that the things we fear from far away (robot overlords, etc.) tend to look a lot different by the time they get right under our nose. And this is a great example. Note that researcher Tim Hwang was doing work with socialbots back in 2012, when he ran a competition to see who could influence the most people on Twitter with an automated fauxbot. By the end of the competition we had learned two things: 1. It's not hard at all to make people think you're a person when you're not, and 2. It's very, very ethically questionable to do these kinds of experiments.
Not that it matters much. Using a social media platform waives your right to be free from experimentation.
In what I guess I will call traditional research, if your experiment involves people, you have to take some ethics classes, and your plan has to pass a group of people whose job it is to make sure you're not doing anything ethically dubious to your subjects (an institutional review board, in the lingo). In very simple terms, you're not supposed to do harm to your subjects (only human subjects though, sorry animals).
But when you participate in social media, you're willfully participating in the experiment that is the platform's digital ecosystem (and because the programs aren't programming themselves, not yet, the platform's corporate culture has influence here as well). The whole thing is an experiment from the moment you log in.
So what happens when you are that poor schmuck who fell in love with Tim Hwang's socialbot and then got his heart broken when the competition ended? Tough shit?
Or when you realize the "woman" you've been chatting up for the past two days is really a feature designed by the dating app itself to keep you engaged at the most opportune moments, what is a melted snowflake to do? Who to sue?
Listening to the news last night, I heard two people talking about results from a recent Facebook experiment that showed we can make people act nicer to each other through pretty simple and very subtle programming changes. Or make them happier by showing them happier news in their feed. (It's called a feed, for f's sake.)
Is a social media platform responsible for the death of a teen who may have been only a few happy news clips away from making that final decision?
Nope. Not right now at least. Not until we face the hard facts about how powerful it is to effect mass population manipulation with only minor changes in program code.
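Just to make concrete how minor those changes can be, here's a sketch of a feed ranker in Python. Everything in it is hypothetical (the Post fields, the scoring rule, and the mood_weight knob are my inventions, not anything from Facebook's actual code), but it shows how turning one coefficient quietly shifts the emotional tone of what users see.

```python
# Minimal sketch of a sentiment-weighted feed ranker.
# Hypothetical throughout: the fields, weights, and sentiment
# scores are illustrative, not any platform's real algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float   # predicted clicks/likes/shares, normalized 0-1
    sentiment: float    # -1.0 (negative) to +1.0 (positive)

def rank_feed(posts, mood_weight=0.0):
    """Order posts by engagement, nudged by emotional tone.

    mood_weight is the one knob being turned: 0.0 is "neutral",
    a small positive value quietly favors happier content.
    """
    return sorted(
        posts,
        key=lambda p: p.engagement + mood_weight * p.sentiment,
        reverse=True,
    )

posts = [
    Post("Local shelter reunites lost dog with family", 0.55, +0.9),
    Post("Markets tumble amid grim forecasts", 0.60, -0.8),
    Post("Neighborhood bake sale breaks fundraising record", 0.50, +0.7),
]

# The same feed, two different emotional diets, one parameter apart.
print([p.text for p in rank_feed(posts)])
print([p.text for p in rank_feed(posts, mood_weight=0.2)])
```

With the knob at zero, the grim market story leads; at 0.2, it drops to the bottom. Nobody is shown anything false, and nobody would notice the difference from the inside.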
Post Script:
The chances you've been involved in one of these experiments already? 100%
"Low-credibility content" is the new fake news, and "auto amplification" is the act of spreading it.
Automated amplification works because of herd mentality. A great experiment I read about recently in Geoffrey West's book Scale: a researcher took 100 crowdfunding campaigns that had been sitting at $0 for a while and donated $1 to half of them. The ones that got no donation stayed at zero, while the others gained at least some extra donations. This goes by dozens of other names, such as the law of cumulative advantage, the Matthew effect, and "the rich get richer."
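A toy version of that dynamic is easy to simulate. The herd rule below is my own assumption for illustration (each passerby's chance of donating grows with the amount already pledged), so this is a sketch of cumulative advantage in general, not a reconstruction of the actual experiment.

```python
# Toy simulation of cumulative advantage ("the rich get richer").
# The herd rule is invented for illustration: each passerby donates
# with a probability that grows with what's already pledged.
import random

random.seed(42)

N_CAMPAIGNS = 100
N_PASSERSBY = 50
BASE_RATE = 0.01    # chance of donating to an empty campaign
HERD_BOOST = 0.04   # extra pull per dollar already pledged

campaigns = [0.0] * N_CAMPAIGNS

# Seed half the campaigns with $1, like the experiment in Scale.
for i in range(0, N_CAMPAIGNS, 2):
    campaigns[i] = 1.0

for _ in range(N_PASSERSBY):
    for i, total in enumerate(campaigns):
        p = min(1.0, BASE_RATE + HERD_BOOST * total)
        if random.random() < p:
            campaigns[i] += 1.0

seeded = [campaigns[i] for i in range(0, N_CAMPAIGNS, 2)]
unseeded = [campaigns[i] for i in range(1, N_CAMPAIGNS, 2)]
print(f"avg raised, seeded:   ${sum(seeded) / len(seeded):.2f}")
print(f"avg raised, unseeded: ${sum(unseeded) / len(unseeded):.2f}")
```

Run it and the seeded half reliably ends up several times ahead, even though the only intervention was a single dollar. Swap "dollar" for "retweet" and you have the business model of a botnet.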
Notes:
Study: It only takes a few seconds for bots to spread misinformation
Ars Technica, Nov 2018
The spread of low-credibility content by social bots
Nature Communications, 2018
The spread of true and false news online
Science, 2018
The spread of true and false information online
MIT, 2018
I'm Not a Real Friend, But I Play One on the Internet
Tim Hwang, HOPE #9, July 2012
SocialBots
Network Address, 2012
Institutional Review Board
Also known as an independent ethics committee, ethical review board, or research ethics board: a committee that applies research ethics by reviewing proposed research methods to ensure they are ethical.
Post Post Script Script:
Looks like something is real popular in the news right now:
On Twitter, limited number of characters spreading fake info
phys.org, Jan 2019
Washington fears new threat from 'deepfake' videos
The Hill, Jan 2019