Friday, September 14, 2012

SocialBots


video still, Tim Hwang, 2012
I'm Not a Real Friend, But I Play One on the Internet
Tim Hwang, HOPE 9, July 2012

Here I am simply reposting, as the author has done a great job of summarizing the talk (with some added criticisms and further reading).

The Center for Internet and Society, Stanford Law School
Robotics and the Law: Chronicling robotics programming at Stanford Law School
EXPERIMENTS WITH SOCIALBOTS
JULY 22, 2012 • BY WENDY M. GROSSMAN

At the east coast hacker conference HOPE 9 the weekend of July 13-15, 2012, Pacific Social Architecting Corporation's Tim Hwang reported on experiments the company has been conducting with socialbots: bot accounts deployed on social networks like Twitter and Facebook for the purpose of studying how they can be used to influence and alter the behavior and social landscapes of other users. Their November 2011 paper (PDF) gives some of the background.

The highlights:

- Early in 2011, Hwang conducted a competition to study socialbots. Teams scored points by getting their bot-controlled Twitter accounts (and any number of supporting bots) to make connections with and elicit social behavior from an unsuspecting cluster of 500 online users. Teams got +1 point for each mutual follow, +3 points for each social response, and -15 if an account was detected and killed by Twitter (a minimal sketch of this scoring appears after this list). The New Zealand team won with bland, encouraging statements; no AI was involved, but the bot's responses were encouraging enough for people to keep talking to it. A second entrant used Amazon's Mechanical Turk: when another user asked the bot a direct question, it forwarded the question to the Turk workers and returned their answer. A third effort redirected tweets randomly between unconnected groups of users talking about the same topics.

- A bot can get good, human responses to "Are you a bot?" by asking that question of human users and reusing their answers.

- To make bots more credible as accounts inhabited by humans, it helped for them to take enough hours off to appear to go to sleep like humans (see the behavior sketch after this list).

- Many bot personalities tend to fall apart in one-to-one communication, so they wouldn’t fare well in traditional AI/Turing test conditions – but online norms help them seem more credible.

- Governments are beginning to get into this. The researchers found bots actively promoting both sides of the most recent Mexican election. Newt Gingrich claimed that the number of Twitter followers he had showed he had a grassroots following on the Internet; however, an aide who had quit disclosed that most of those followers were fakes, boosted by blank accounts created by a company hired for the purpose. Experienced users are pretty quick to spot fake accounts; will we need crowd-based systems to protect less sophisticated users (like collaborative spam-reporting systems)? But this is only true of the rather crude bots we have so far. What about more sophisticated ones? Hwang believes the bigger problem will come when governments adopt the much more difficult-to-spot strategy of using bots to "shape the social universe around them" rather than to censor.
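For concreteness, here is a minimal sketch of the competition's scoring as described above. Only the point values (+1, +3, -15) come from the talk; the event names and code structure are my own.

```python
# Sketch of the socialbot competition's scoring rules. Only the point
# values are from the source; everything else is illustrative.
from dataclasses import dataclass

POINTS = {
    "mutual_follow": 1,     # a target user follows the bot back
    "social_response": 3,   # a target replies to or engages with the bot
    "suspended": -15,       # Twitter detects and kills the account
}

@dataclass
class TeamScore:
    team: str
    total: int = 0

    def record(self, event: str) -> None:
        """Apply the score for one observed event."""
        self.total += POINTS[event]

# Example: two follow-backs and one reply, then one suspension.
score = TeamScore("team_nz")
for event in ["mutual_follow", "mutual_follow", "social_response", "suspended"]:
    score.record(event)
print(score.total)  # -10: one suspension wipes out a lot of engagement
```

The steep -15 penalty explains the winning strategy: bland, inoffensive chatter that keeps the account alive matters more than clever responses.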
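And a toy behavior sketch combining two of the credibility tricks above: the sleep schedule and the harvested "are you a bot?" replies. The schedule, the reply lines, and the interface are all invented for illustration; the talk describes the techniques, not an implementation.

```python
# Minimal sketch of two humanizing tricks from the talk. All specifics
# (hours, reply text, function names) are assumptions, not the authors' code.
import random
from datetime import datetime
from typing import Optional

SLEEP_HOURS = range(1, 8)  # assumed: offline between 1am and 8am local time

# Replies previously harvested by asking real humans "are you a bot?"
HARVESTED_DENIALS = [
    "lol no, why?",
    "last time I checked I was human",
    "ha, I get that a lot",
]

def awake(now: datetime) -> bool:
    """Stay silent during scheduled sleep hours to mimic a human's day."""
    return now.hour not in SLEEP_HOURS

def reply_to(message: str) -> Optional[str]:
    """Return a reply, or None if the bot is 'asleep' or has nothing to say."""
    if not awake(datetime.now()):
        return None
    if "are you a bot" in message.lower():
        return random.choice(HARVESTED_DENIALS)
    return None  # other behavior (bland encouragement, etc.) omitted

print(reply_to("hey, are you a bot?"))  # a human-sourced denial, or None at night
```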

Hwang noted the ethical quandary raised by people beginning to flirt with the bot: how long should the bot go on? Should it shut down? What if the human feels rejected? I think the ethical quandary ought to have started much earlier; although the experiment was framed in terms of experimenting with bots, in reality the teams were experimenting on real people, even if only for two weeks and on Twitter.

Hwang is in the right place when he asks, "Does it presage a world in which people design systems to influence networks this way?" It's a good question, as is the question of how to defend against this kind of thing. But it seems to me typical of the constant reinvention of the computer industry that Hwang had not read – or heard of – Andrew Leonard's 1997 book Bots: The Origin of New Species, which reports on the prior art in this field: experiments with software bots interacting with people through the late 1990s (I need to reread it myself). So perhaps one of the first Robots, Freedom, and Privacy dangers is the failure to study past experiments in the interests of avoiding the obvious ethical issues that have already been uncovered.
http://blogs.law.stanford.edu/robotics/2012/07/22/experiments-with-socialbots/

PacSocial: Field Test Report
Max Nanis, Ian Pearce, Tim Hwang
November 15, 2011
http://www.pacsocial.com/files/pacsocial_field_test_report_2011-11-15.pdf


Bots: The Origin of New Species
Andrew Leonard, 1997


New Algorithm Can Spot the Bots in Your Twitter Feed
Lee Simmons, Wired, October 17, 2013

Computer scientists develop tool for uncovering bot-controlled Twitter accounts
2014, phys.org
http://phys.org/news/2014-05-scientists-tool-uncovering-bot-controlled-twitter.html

"Part of the motivation of our research is that we don't really know how bad the problem is in quantitative terms," said Fil Menczer, the informatics and computer science professor who directs IU's Center for Complex Networks and Systems Research, where the new work is being conducted as part of the information diffusion research project called Truthy. "Are there thousands of social bots? Millions? We know there are lots of bots out there, and many are totally benign. But we also found examples of nasty bots used to mislead, exploit and manipulate discourse with rumors, spam, malware, misinformation, political astroturf and slander."

