Saturday, April 2, 2022

Psyops Bots


Tailored messaging increases understanding of climate change in Republicans
Jun 2021, phys.org

Tailored advertising can increase awareness among Republicans of the dangers posed by climate change. The campaign used videos featuring "credible" personalities who spoke about the impact of climate change and what it might mean for the future. The researchers surveyed 1,600 people living in the targeted districts, asking about their views on climate change both before and after the ad campaign. They found that the campaign had increased understanding of the dangers of climate change among Republicans by several percentage points, and had convinced many of them that it was due to human activities.

via Yale Program on Climate Change Communication: Matthew H. Goldberg et al, Shifting Republican views on climate change through targeted advertising, Nature Climate Change (2021). DOI: 10.1038/s41558-021-01070-1


Majority of Covid misinformation came from 12 people
Jul 2021, The Guardian

In the last two months, our analysis of anti-vaxx content posted or shared to Facebook 689,000 times shows that up to 73% of that content originates from the leading anti-vaxxers included in this report.

The Disinformation Dozen -- including Robert F. Kennedy Jr., Joseph Mercola, and Ty and Charlene Bollinger, among others -- continually violate the terms of service agreements on Facebook and Twitter.

via Center for Countering Digital Hate: The Disinformation Dozen


'Likes' and 'shares' teach people to express more outrage online
Aug 2021, phys.org

Moderating extremism and performative outrage for the otherwise passive-aggressive:

[Modified from the abstract] - Positive social feedback for outrage expressions increases the likelihood of future outrage expressions, consistent with principles of reinforcement learning. In addition, users conform their outrage expressions to the expressive norms of their social networks, suggesting norm learning also guides online outrage expressions.

Norm learning overshadows reinforcement learning when normative information is readily observable: in ideologically extreme networks, where outrage expression is more common, users are less sensitive to social feedback when deciding whether to express outrage. Our findings highlight how platform design interacts with human learning mechanisms to affect moral discourse in digital public spaces.
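The reinforcement-learning principle in the abstract can be sketched in a few lines. The update rule and numbers here are invented for illustration, not the paper's actual model:

```python
# Toy reinforcement-learning sketch of the feedback loop: each time an
# outraged post earns likes/shares, the tendency to post outrage again
# gets nudged upward; without feedback it decays slightly.
def update(p_outrage, got_feedback, lr=0.1):
    # Reward-driven update: move toward 1 on positive feedback, toward 0 otherwise.
    target = 1.0 if got_feedback else 0.0
    return p_outrage + lr * (target - p_outrage)

p = 0.2  # initial tendency to post outrage
for liked in [True, True, True, False, True, True]:
    p = update(p, liked)
print(f"tendency after feedback: {p:.2f}")
```

Even with one miss in six, the tendency more than doubles, which is the "likes teach outrage" dynamic in miniature.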

via Yale: How social learning amplifies moral outrage expression in online social networks, Science Advances (2021). DOI: 10.1126/sciadv.abe5641


Disagreeable people found to be more prone to conspiracy theories
Aug 2021, phys.org

More support for the idea that conspiracy theorists are mostly just regular people. They're more likely to be lonely and isolated, but they're still regular people:

"Most people think conspiracy theorists are crackpots," he said. "Talking to these people, I felt there was more going on. Some wanted to socially connect with other people. A small subset wanted to take advantage of other people. I wanted to see what underlies these beliefs and theories."

via University of Oregon: Cameron S. Kay, Actors of the most fiendish character: Explaining the associations between the Dark Tetrad and conspiracist ideation, Personality and Individual Differences (2020). DOI: 10.1016/j.paid.2020.110543


Experiment shows groups of laypeople reliably rate stories as effectively as fact-checkers do
Sep 2021, phys.org

Remember, we could fix this right now if we wanted to:

"One problem with fact-checking is that there is just way too much content for professional fact-checkers to be able to cover, especially within a reasonable time frame," says Jennifer Allen, a Ph.D. student at the MIT Sloan School of Management and co-author of a newly published paper detailing the study.

"The average rating of a crowd of 10 to 15 people correlated as well with the fact-checkers' judgments as the fact-checkers correlated with each other. This helps with the scalability problem because these raters were regular people without fact-checking training, and they just read the headlines and lead sentences without spending the time to do any research."

via MIT: Scaling up fact-checking using the wisdom of crowds, Science Advances (2021). DOI: 10.1126/sciadv.abf4393


Facebook forced troll farm content on over 40% of all Americans each month
Sep 2021, Ars Technica

In the wake of the 2016 election, Facebook knew it had a problem. Pages and fake accounts created by the Kremlin-backed Internet Research Agency had spread through the social network and drawn massive engagement from real users. Facebook knew it had to get things under control. [It didn't.]

The troll farms highlighted in the report primarily targeted four different groups: American Indians, Black Americans, Christian Americans, and American women.

One of the researchers described users thus: "They tend to love these Pages. They like how entertaining the posts are and how they reaffirm their already-held beliefs."

via MIT Technology Review: Troll farms reached 140 million Americans a month on Facebook before 2020 election, internal report shows, Sep 16 2021 [soft paywall]


Rates of infectious disease linked to authoritarian attitudes and governance
Sep 2021, phys.org

An unconscious code of conduct helps us stay disease-free, including a fear and avoidance of unfamiliar (and so possibly infected) people. When infection risk is high, this "parasite stress" behavior increases, potentially manifesting as attitudes and even voting patterns that champion conformity and reject "foreign outgroups," a core trait of authoritarian politics.

Now, a new study, the largest yet to investigate links between pathogen prevalence and ideology, reveals a strong connection between infection rates and strains of authoritarianism in public attitudes, political leadership and even lawmaking. [This study was conducted prior to covid data, circa 2018.]

They found that the more infectious US cities and states went on to have more authoritarian-leaning citizens. The most authoritarian US states had rates of infectious diseases—from HIV to measles—around four times higher than the least authoritarian states, while for the most authoritarian nations it was three times higher than the least. Moreover, in both nations and US states, higher rates of infectious disease correlated with more "vertical" laws—those that disproportionately affect certain groups, such as abortion control or extreme penalties for certain crimes. This was not the case with "horizontal" laws that affect everyone equally.

via University of Cambridge: Leor Zmigrod et al, The psychological and socio-political consequences of infectious diseases: Authoritarianism, governance, and nonzoonotic (human-to-human) infection transmission, Journal of Social and Political Psychology (2021). DOI: 10.5964/jspp.7297


“Hacker X”—the American who built a pro-Trump fake news empire—unmasks himself
Oct 2021, Ars Technica

This is how it really happens. 

This article was a bit controversial when it came out, because the outlet, Ars Technica, was accused of not vetting its source enough, and of simply providing him a means to feel better about himself after a healthy apology, while enabling him to prolong his campaign of deception.

Regardless of all that, I was just intrigued by the description of how it all goes down:
From that moment onward, the hacker and office staff would joke about the stuff they were being assigned to write, like a conspiracy-laden writeup on "chemtrails" or a piece on "lemons curing cancer," thinking that only a small "ultracrazy" percentage of readers actually believed what was being written. ...

Next, this is exactly how the 2010 Flash Crash went:
The new publishing strategy, along with the additional fake news sites, caused a rapid spike in traffic. As Willis puts it, this all felt "like playing a video game and getting new high scores to me. I did not think of the readers as people but more like background characters in a video game. I am neurodiverse and have major issues with understanding empathy due to my condition. Crunching numbers is something I love to do; these were numbers I wanted to go up, and I would do it with no emotional attachment to the material or people."


Social media bots may appear human, but their similar personalities give them away
Nov 2021, phys.org

Bot or not --

While the language used by any one bot reflected convincingly human personality traits, their similarity to one another betrayed their artificial nature.

Their results showed that, individually, the bots look human, having reasonable values for their estimated demographics, emotions and personality traits. However, as a whole, the social bots look like clones of one another, in terms of their estimated values across all 17 attributes.

The language the bots used appeared characteristic of a person in their late 20s, and was overwhelmingly positive.

"Imagine you're trying to find spies in a crowd, all with very good but also very similar disguises," says Schwartz. "Looking at each one individually, they look authentic and blend in extremely well. However, when you zoom out and look at the entire crowd, they are obvious because the disguise is just so common."

"There is a lot of variation in the type of accounts one can encounter on Twitter, with an almost science fiction-like landscape: humans, human-like clones pretending to be humans, and robots," says Giorgi.
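The "zoom out and the disguise becomes obvious" idea is easy to demonstrate. The trait vectors below are randomly generated for illustration, not the study's data:

```python
# Toy illustration of the "clone" signal: individually each account's traits
# look plausible, but bot accounts cluster far more tightly than humans
# across their estimated attributes.
import random

random.seed(0)
N_TRAITS = 17  # the paper estimates 17 demographic/personality attributes

# Bots: small random perturbations of one shared template (near-clones).
template = [random.uniform(0, 1) for _ in range(N_TRAITS)]
bots = [[random.gauss(t, 0.02) for t in template] for _ in range(50)]

# Humans: each profile drawn independently (genuine diversity).
humans = [[random.uniform(0, 1) for _ in range(N_TRAITS)] for _ in range(50)]

def mean_pairwise_dist(group):
    """Average Euclidean distance between all pairs of trait vectors."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5
             for i, x in enumerate(group) for y in group[i + 1:]]
    return sum(dists) / len(dists)

print("bots:  ", round(mean_pairwise_dist(bots), 3))    # small spread
print("humans:", round(mean_pairwise_dist(humans), 3))  # large spread
```

Every bot vector falls inside the plausible human range, so no single account is suspicious; only the collective tightness gives the game away.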

via University of Pennsylvania and Stony Brook University: Salvatore Giorgi et al, Characterizing Social Spambots by their Human Traits, Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (2021). DOI: 10.18653/v1/2021.findings-acl.457


Facebook uncovers Chinese network behind fake expert
Dec 2021, BBC News

Just wanted to make sure we understand how this works:

"It added that the operation used Virtual Private Network (VPN) infrastructure to conceal its origin, and to give fake Swiss biologist Wilson Edwards a more rounded personality. It also said that his profile photo also appeared to have been generated using machine-learning capabilities."

I'm not so much interested in the misinformation aspect of this as in the "fake people" aspect, where social media companies use fake accounts to inflate the apparent number of real people using their services, kind of like pumping up the price of your own item at an auction by secretly bidding on it.

Link to recent post on the disease model and how it works in the noosphere:


Conspiracy mentality around the globe tends to be particularly pronounced on the political fringes
Jan 2022, phys.org

The two surveys constitute the largest investigation of conspiracy mentality conducted to date, both in terms of size, with around 100,000 respondents, and in the fact that they were undertaken in 26 different countries.

Also considered was whether a perceived lack of political control—because, for example, an individual's preferred political party has been excluded from government—influenced the link between political orientation and conspiracy mentality; this outcome might be expected due to the effects of what psychologists call "control deprivation."

And what could also be described as "conspiracy theories are for losers" (pun intended).

via Johannes Gutenberg University Mainz: Roland Imhoff et al, Conspiracy mentality and political orientation across 26 countries, Nature Human Behaviour (2022). DOI: 10.1038/s41562-021-01258-7


Researchers find new way to amplify trustworthy news content on social media without shielding bias
Feb 2022, phys.org

The problem is that these businesses don't care --

"Social media sites continue to amplify misinformation and conspiracy theories."

They measured content on the reliability of the source and the political diversity of their audience.

"The algorithm ends up picking the wrong signal and keeps promoting it further."

So the algorithm sees that we like it and gives us more; but we don't realize that we actually like low-credibility information because it has a higher likelihood of being interesting, precisely by way of being fake. Real things are complicated and don't fit neatly into the categories in our heads, the ones that help us determine whether something is "my kind of thing." It's harder to generate your own opinion about something nuanced, because it requires you to re-assess what "your thing" really is, and that's hard.

By the way, they're using the NewsGuard Reliability Index to rate the news sources. [wiki]

They found that incorporating the partisan diversity of a news audience can increase the reliability of recommended sources while still providing users with relevant recommendations.

^That's some high quality reverse recursion right there.
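The re-ranking idea can be sketched like so. The source names, engagement numbers, and the simple spread-based diversity measure are all invented for illustration; the paper's actual measures differ:

```python
# Instead of ranking sources by engagement alone, weight each source by the
# partisan diversity of its audience, here measured as the spread of audience
# partisanship scores in [-1, +1].
from statistics import pstdev

sources = [
    # (name, engagement, partisanship scores of sampled audience members)
    ("hyperpartisan-blog.example", 9000, [-0.9, -0.8, -0.95, -0.85, -0.9]),
    ("wire-service.example",       4000, [-0.6, -0.1, 0.2, 0.7, -0.3, 0.5]),
    ("tabloid.example",            7000, [0.9, 0.95, 0.8, 0.85, 0.9]),
]

def diversity(partisanship):
    # A diverse audience spans the spectrum, so its scores have high spread.
    return pstdev(partisanship)

by_engagement = sorted(sources, key=lambda s: -s[1])
by_diversity  = sorted(sources, key=lambda s: -(s[1] * diversity(s[2])))

print("engagement-only:   ", [s[0] for s in by_engagement])
print("diversity-weighted:", [s[0] for s in by_diversity])
```

Engagement alone promotes the loudest echo chambers; weighting by audience diversity pushes the broadly-read source to the top even with less raw engagement.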

"Ciampaglia and his colleagues propose social media platforms adopt this new strategy in order to help prevent the spread of misinformation."

Right...

via University of South Florida, Indiana University and Dartmouth College: Giovanni Ciampaglia, Political audience diversity and news reliability in algorithmic ranking, Nature Human Behaviour (2022). DOI: 10.1038/s41562-021-01276-5


How can we get better at discerning misinformation from reliable expert consensus?
Feb 2022, phys.org

There's a lot of work being done on misinformation these days, and it's good to see. But I have to draw the line somewhere as to how much I bother to archive here. 

This one makes the cut:

A 2019 study by Yale University found that people believe a single source of information repeated across many channels (a "false consensus") just as readily as multiple people telling them something based on many independent original sources (a "true consensus").

"We found that illusion can be reduced when we give people information about how the original sources used evidence to arrive at their conclusions," Dr. Connor Desai says. [Full circle, see the very top article in this post.]

For instance, over 80 percent of climate change denial blogs repeat claims from a single person who claims to be a 'polar bear expert'.

via University of New South Wales: Saoirse Connor Desai et al, Getting to the source of the illusion of consensus, Cognition (2022). DOI: 10.1016/j.cognition.2022.105023

Also: Sami R. Yousif et al, The Illusion of Consensus: A Failure to Distinguish Between True and False Consensus, Psychological Science (2019). DOI: 10.1177/0956797619856844
