Thursday, June 24, 2021

Now Hiring

AKA The Bot Market's Hot

ISIS still evading detection on Facebook, report says
Jul 2020, phys.org

To amass digital territory is their purpose. It's been a while since I thought about the internet as real estate, or rather our eyeballs as real estate...
The Institute for Strategic Dialogue (ISD) tracked 288 Facebook accounts linked to a particular ISIS network over three months.

The researchers believe that at the centre of the network was one user who managed around a third (90 out of 288) of the Facebook profiles.

This was accomplished by generating real North American phone numbers and looking for associated Facebook accounts.

When a match was found, a password-reset code would be requested for that number, locking out the original account holder so the Facebook profile could be used to spread content.

The researchers say another key to the survival of ISIS content on the platform was the way in which ISIS supporters have learned to modify their content to evade controls. This included:
  • Breaking up text and using strange punctuation to evade any tools that search for key words (a rough sketch of the countermeasure follows this list)
  • Blurring ISIS branding, or adding Facebook's own video effects
  • Adding the branding of mainstream news outlets over the top of ISIS content
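
These are cheap tricks, which is sort of the point. For a feel of what the defending side has to do, here's a minimal sketch of the normalization a keyword filter would run before matching; the function names and the one-item banned list are placeholders I made up, not anything Facebook actually uses:

import re
import unicodedata

# Hypothetical keyword list; a real filter would use a much larger, curated set.
BANNED_TERMS = ["example banned phrase"]

def normalize(text):
    """Collapse the cheap evasions described above: odd punctuation between
    letters, zero-width characters, and lookalike Unicode forms."""
    text = unicodedata.normalize("NFKC", text)              # fold lookalike/full-width characters
    text = re.sub(r"[\u200b\u200c\u200d\ufeff]", "", text)  # strip zero-width characters
    text = re.sub(r"(?<=\w)[.\-_*|]+(?=\w)", "", text)      # "w.o.r.d" -> "word"
    return re.sub(r"\s+", " ", text).lower()                # collapse whitespace, lowercase

def matches_banned_term(text):
    cleaned = normalize(text)
    return any(term in cleaned for term in BANNED_TERMS)

print(matches_banned_term("e.x.a.m.p.l.e   b*a*n*n*e*d phrase"))  # True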

Instagram removes hundreds of accounts tied to username hacking
Feb 2021, Reuters

But imagine that we're already semibots, and buying and selling each other.

Fake Amazon reviews 'being sold in bulk' online
Feb 2021, BBC News

It's the Amazon Marketplace; you can buy anything here:
The cost of fake reviews: £5 each to start. These included "packages" of fake reviews available for sellers to buy for about £15 individually, as well as bulk packages starting at £620 for 50 reviews and going up to £8,000 for 1,000.
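
Run the numbers on those packages and the bulk discount is the whole business model; a quick back-of-the-envelope check using the figures quoted above:

# Per-review cost at the price points reported in the article.
packages = {
    "single review": (1, 15),             # about £15 individually
    "bulk, 50 reviews": (50, 620),        # £620 for 50
    "bulk, 1,000 reviews": (1000, 8000),  # £8,000 for 1,000
}

for name, (count, price_gbp) in packages.items():
    print(f"{name}: £{price_gbp / count:.2f} per review")

# single review: £15.00 per review
# bulk, 50 reviews: £12.40 per review
# bulk, 1,000 reviews: £8.00 per review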

Researchers study online 'pseudo-reviews' that mock products
Apr 2021, phys.org

New fakes for the new world. Pseudo-fakes and quasi-forgeries. Pseudo reviews mock the product, like this example:
"I was able to purchase this amazing television with an FHA loan (30 year fixed-rate w/ 4.25% APR) and only 3.5% down. This is, hands down, the best decision I've ever made. And the box it came in is incredibly roomy too, which is a huge bonus, because I live in it now."
Apparently too many pseudo reviews can sway purchaser intent. And if a particular platform becomes infested with them, they can lose credibility altogether. 
via University of Akron: Federico de Gregorio et al., Pseudo-reviews: Conceptualization and consumer effects of a new online phenomenon, Computers in Human Behavior (2020). DOI: 10.1016/j.chb.2020.106545

Omegle: 'I'm being used as sex-baiting bot' on video chat site
Apr 2021, BBC News

God-level fake operation:

Kid goes to a website where you meet random strangers. An older woman convinces him to take off his clothes and go full Discovery Channel on himself. He does it again with other people. He quits the site, but then one day a year later, he goes back on and gets matched to ... himself. Someone uses his recorded video, overdubbed with typed conversations created in real time, in order to get other people to join. Who knows, maybe that's how he got convinced in the first place. 

"It was like a fully advanced system with different video sequences of me doing different stuff."

One day, the people running this "advanced system" won't be people or even a corporation, but an intelligent entity -- today we call it AI, but it will by then be a combination of people and computers all mashed together so you can't really tell who's who anymore. 

Army of fake fans boosts China’s messaging on Twitter
May 2021, AP News

This isn't really news; I added this one just for posterity.
Good term though: "Counterfeiting Consensus"

Mass scale manipulation of Twitter Trends discovered
June 2021, phys.org
"We found that 47% of local trends in Turkey and 20% of global trends are fake, created from scratch by bots. Between June 2015 and September 2019, we uncovered 108,000 bot accounts involved, the biggest bot dataset reported in a single paper. Our research is the first to uncover the manipulation of Twitter Trends at this scale," Elmas continued. (But don't forget to check how they define bot activity, as this can differ a lot.)
via École Polytechnique Fédérale de Lausanne: Ephemeral Astroturfing Attacks: The Case of Fake Twitter Trends. arXiv:1910.07783v4 [cs.CR], arxiv.org/abs/1910.07783
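
To make that caveat concrete, here is a deliberately crude sketch of one way "bot activity" gets operationalized; the thresholds and field names are invented for illustration, and this is not the classifier the EPFL authors use:

from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_day: float   # average posting rate
    account_age_days: int
    trend_tweets: int       # tweets containing the trending phrase
    total_tweets: int

def looks_automated(acct, rate_threshold=100.0, trend_fraction_threshold=0.8):
    """Crude heuristic: flag accounts with an extreme posting rate, or brand-new
    accounts that tweet almost exclusively about the trend they are pushing.
    Real studies use far richer signals, and the choice changes the headline numbers."""
    trend_fraction = acct.trend_tweets / max(acct.total_tweets, 1)
    return (acct.tweets_per_day > rate_threshold
            or (acct.account_age_days < 30 and trend_fraction > trend_fraction_threshold))

suspect = Account("freshaccount123", tweets_per_day=4, account_age_days=5,
                  trend_tweets=45, total_tweets=50)
print(looks_automated(suspect))  # True: five-day-old account, 90% of its tweets push the trend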

Conservatives more susceptible to believing falsehoods
Jun 2021, phys.org

Sorry guys:
Researchers found that liberals and conservatives in the United States both tended to believe claims that promoted their political views, but that this more often led conservatives to accept falsehoods while rejecting truths.

"But the deck is stacked against conservatives because there is so much more misinformation that supports conservative positions. As a result, conservatives are more often led astray."

Although the information environment was the primary reason conservatives were susceptible to misinformation, it may not be the only one.

Results showed that even when the information environment was taken into account, conservatives were slightly more likely to hold misperceptions than were liberals.

"It is difficult to say why that is," Garrett said. "We can't explain the finding with our data alone."

Conservatives also showed a stronger "truth bias," meaning that they were more likely to say that all the claims they were asked about were true.

"We show that the media environment is shaping people's ability to do this very basic, fundamental task."
via Ohio State University: R.K. Garrett et al., "Conservatives' susceptibility to political misperceptions," Science Advances (2021).

Post Script (Don't even try it)
The double-down is real: Correcting online falsehoods might make matters worse
May 2021, phys.org

Not only is misinformation increasing online, but attempting to correct it politely on Twitter can have negative consequences, leading to even less-accurate tweets and more toxicity from the people being corrected, according to a new study co-authored by a group of MIT scholars.
...
On Twitter, people seem to spend a relatively long time crafting primary tweets, and little time making decisions about retweets.
via Massachusetts Institute of Technology: Mohsen Mosleh et al, Perverse Downstream Consequences of Debunking: Being Corrected by Another User for Posting False Political News Increases Subsequent Sharing of Low Quality, Partisan, and Toxic Content in a Twitter Field Experiment, Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (2021). DOI: 10.1145/3411764.3445642
Post Post Script (Then again)
Artificial intelligence system could help counter the spread of disinformation
May 2021, phys.org

In total, they compiled 28 million Twitter posts from 1 million accounts during the 2017 French elections and detected bots with 96% accuracy. Note that the way you define a bot is important in these studies. But also note that they've gone beyond using mere activity levels as their metric: they also look at how each bot causes the network as a whole to change and amplify messages, and at each bot's behavior, such as whether it interacts with foreign media and what language it uses.

Also, this approach looks not just at the bots but at the real people too, and at how they all impact the network as a whole.

via MIT Lincoln Laboratory's Artificial Intelligence Software Architectures and Algorithms Group: Steven T. Smith et al., Automatic detection of influential actors in disinformation networks, Proceedings of the National Academy of Sciences (2021). DOI: 10.1073/pnas.2011216118
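
The network-level framing is the interesting part to me. As a rough illustration of what "how an account changes the network" can mean, here's a toy sketch that ranks accounts in a made-up retweet graph by PageRank; consider it a stand-in for, not a reproduction of, the influence estimation in the paper:

import networkx as nx

# Toy retweet graph: an edge (a, b) means account a retweeted account b,
# so "influence" flows toward b. All handles are invented.
retweets = [
    ("user1", "amplifier_a"), ("user2", "amplifier_a"), ("user3", "amplifier_a"),
    ("user4", "amplifier_b"), ("user5", "amplifier_b"),
    ("amplifier_a", "originator"), ("amplifier_b", "originator"),
]

G = nx.DiGraph()
G.add_edges_from(retweets)

# PageRank over the retweet graph: accounts whose content keeps getting picked up
# and re-shared score highest, even if they post rarely themselves.
scores = nx.pagerank(G, alpha=0.85)
for account, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{account}: {score:.3f}")

# "originator" ranks first even though only two accounts retweet it directly;
# a raw activity count would have missed it entirely.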
