Sunday, November 26, 2017

Hiding in Plain Sight

This is obviously an air conditioner

AI image recognition fooled by single pixel change
Nov 2017, BBC

Computers can be fooled into thinking a picture of a taxi is a dog just by changing one pixel, suggests research.

The limitations emerged from Japanese work on ways to fool widely used AI-based image recognition systems.

Many other scientists are now creating "adversarial" example images to expose the fragility of certain types of recognition software.

"There is certainly something strange and interesting going on here, we just don't know exactly what it is yet," one researcher told the BBC.
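The mechanics of a one-pixel attack are simple to sketch. Below is a toy version against a made-up linear "classifier" (the model, image, and greedy search here are all invented for illustration - real attacks like the one in the Japanese work use differential evolution against deep networks):

```python
import numpy as np

# Toy linear "classifier" over an 8x8 grayscale image; two made-up classes,
# say "taxi" vs "dog". A stand-in for a real network, not the actual attack.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 64))

def predict(img):
    return int(np.argmax(W @ img.ravel()))

def one_pixel_attack(img):
    """Greedily try pushing each pixel to 0 or 1 until the label flips."""
    base = predict(img)
    for i in range(img.size):
        for v in (0.0, 1.0):
            adv = img.ravel().copy()
            adv[i] = v
            adv = adv.reshape(img.shape)
            if predict(adv) != base:
                return adv
    return None  # no single-pixel change flipped this toy model

img = rng.uniform(size=(8, 8))
adv = one_pixel_attack(img)
```

If the search succeeds, `adv` differs from `img` in exactly one pixel yet gets a different label - the whole point of the BBC story, in miniature.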

Friday, November 24, 2017

Just Here for the Diamonds


IBM pitches blockchain for cannabis sale
Nov 2017, BBC

Blockchain technology could provide a secure way to track the legal sale of cannabis in Canada, IBM has said.

IBM said: "Blockchain is an ideal mechanism in which BC [British Columbia] can transparently capture the history of cannabis through the entire supply chain, ultimately ensuring consumer safety while exerting regulatory control - from seed to sale."

Technology company Everledger is already using the technology to verify the history of diamond transactions.
-BBC


However interesting it is to see the blockchain and the pot trade finally meet, I am here only looking at the fact that diamonds already have their own currency, as it were. The blockchain, which enables cryptocurrencies, is being used, it seems, to give every commodity its own currency. Then again, it's not even the currency part that is useful here, but the recordkeeping part.

Diamonds demand the airtight recordkeeping that the blockchain provides. Not only do people want to know whether theirs is a blood diamond, they want to know if it was created in a lab, by humans, or in the Earth, by physics. (In the end, it's hard to argue that there is any difference - the diamonds are identical whether they come from one place or the other.) See what happens.
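The recordkeeping part is just an append-only log where each entry commits to the one before it. A minimal sketch (the events and field names are invented; a real blockchain adds signatures, consensus, and distribution on top of this):

```python
import hashlib
import json

# Append-only, hash-chained provenance log - the "seed to sale" /
# "mine to ring" recordkeeping idea, stripped to its core.
def add_record(chain, event):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any edit to any past record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = json.dumps({"event": rec["event"], "prev": rec["prev"]},
                          sort_keys=True)
        if rec["prev"] != prev:
            return False
        if hashlib.sha256(body.encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = []
for step in ["mined in Botswana", "cut in Antwerp", "sold in Toronto"]:
    add_record(chain, step)
```

Rewrite any one entry - say, quietly relabel a lab-grown stone as mined - and `verify` fails, which is exactly the property the diamond trade is paying for.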

image source: link

Got Your Semiotics Right Here


Running on Trumpism but without Trump
Nov 2017, BBC

Nothing to do with the president. Rather, this is a great example of what semiotics is. If I'm holding a sign that says "Trump" but what I really mean is "Gillespie", then I'm playing with semiotics. The sign itself is one part, and the meaning another. The sign can say lots of different things, and it can mean lots of different things too. And "no" doesn't always mean "no". And vice versa. And "would you like to come up and see my etchings" doesn't mean let's check out those etchings.

The fact that we can talk about things that aren't even there is what makes us special. They say, although I'm sure it'll be discovered otherwise one day, that if a monkey doesn't see something happen, then it doesn't happen. They can't talk about things that aren't there, but we can. And not only that, we can use a sign that stands for one thing to mean something else. So cerebral.

Quantum Next


In the future, everything will be "quantum"

New white paper maps the very real risks that quantum attacks will pose for Bitcoin
Nov 2017, phys.org

Quantum Resistant Coin (QRC)

Bitcoin and other cryptocurrencies will be vulnerable to attacks by quantum computers in as little as 10 years. Such attacks could have a disastrous effect on cryptocurrencies as thieves equipped with quantum computers could easily steal funds without detection, thus leading to a quick erosion of trust in the markets.

image source: link

Thursday, November 23, 2017

Fakes and Bots and Fakes and Bots


We finally realize that the "fake" topic on this site will have to take a rest - because we can no longer keep up with reality. It was fun while it lasted, but the number of headlines with "fake" in the title is just too damn high. 

Fake things have taken the world by storm in 2017, and we're all inundated. And that includes facebook.

In an effort to stop fake news, facebook turns your entire timeline into fake news.

Or, in other news, the digitally native, world-changing social platform attempts to make itself more intelligent by programming-by-semantics (i.e., using the word "fake" to find fake stories, as opposed to some other more complicated algorithm, you know, like all the other very much more complicated algorithms they already use).


Facebook's fake news experiment backfires
Nov 2017, BBC

A Facebook test that promoted comments containing the word fake to the top of news feeds has been criticised by users.

The trial, which Facebook says has now concluded, aimed to prioritise "comments that indicate disbelief".

It meant feeds from the BBC, the Economist, the New York Times and the Guardian all began with a comment mentioning the word fake.

The test, which was visible only to some users, left many frustrated.
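The heuristic the BBC describes - bump any comment containing the word "fake" to the top of the feed - can be sketched in a couple of lines (this is a guess at the mechanics, not Facebook's actual code):

```python
# Promote comments containing "fake" to the top, keeping each group's
# original order (Python's sort is stable, so ties stay put).
def rank_comments(comments):
    return sorted(comments, key=lambda c: "fake" not in c.lower())

feed = ["Great reporting", "This is fake news",
        "Thanks for sharing", "Fake!"]
ranked = rank_comments(feed)
```

Which puts "This is fake news" and "Fake!" first on every article from the BBC, the Economist, the New York Times, and the Guardian alike - exactly the backfire the article reports.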

Post Script
For those upset about the Orwellian experiment that is social media 2.0, forget not - we are not consumers but participants/test subjects

Bots Gonna Bot

"ethnically ambiguous guy in white t shirt" returns me this picture of Keanu Reeves in a black t-shirt

Something about elves and the north pole:

Russian troll describes work in the infamous misinformation factory
Nov 2017, NBC News

“These troll farms can produce such a volume of content with hashtags and topics that it distorts what is normal organic conversation,” Clint Watts, senior fellow at the Foreign Policy Research Institute, told NBC News. “It’s called computational propaganda, the volume [at] which they push, false information or true, makes things appear more believable than they might normally be in an organic conversation.”

Writers were separated by floor, with those on the third level blogging to undermine Ukraine and promote Russia. Writers on the first floor — often former professional journalists like Bespalov — created news articles that referred to blog posts written on the third floor. Workers on the third and fourth floor posted comments on the stories and other sites under fake identities, pretending they were from Ukraine. And the marketing team on the second floor weaved all of this misinformation into social media.

Fake Status


Trump's Renoir painting is not real, Chicago museum says
Oct 2017, BBC

My Psy-ops Is Better Than Your Psy-ops


Facebook, Twitter and Google berated by senators on Russia
Nov 2017, BBC

Russian operatives, likely working from St Petersburg, provoked angry Americans to take to the streets [using social media and fake news], a US Senate committee heard on Wednesday.

Lawyers for three technology companies - Facebook, Twitter and Google - were told they were grossly underestimating the scale of the problem.

"You just don't get it," said California Senator Dianne Feinstein.

"What we’re talking about is a cataclysmic change. What we’re talking about is the beginning of cyber-warfare."

What Was I Just Computing Again


Forget about it: A material that mimics the brain
Oct 2017, phys.org

Lattice breathing, electronic forgetting, and proton doping, oh my.

It's hard to find a material that forgets, but researchers at the U.S. Department of Energy's (DOE) Argonne National Laboratory, in collaboration with others, have found one. Now they're trying to make a better computer by making it forget - because we forget, and we are the ultimate computing machine, despite what you might think these days.

Post Script - On Forgetting Again

The internet is rotting - Thousands of sites go offline each year
July 2019, phys.org

How do we remove biases in AI systems? Start by teaching them selective amnesia
Mar 2020, phys.org
Jaiswal and co-author Daniel Moyer, Ph.D., developed the adversarial forgetting approach, which teaches deep learning models to disregard specific, unwanted data factors so that the results they produce are unbiased and more accurate.
...
Deep learning algorithms are great at learning things, but it's more difficult to make sure that the algorithms don't learn certain things. Developing algorithms is a very data-driven process, and data tends to contain biases.
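The idea of training a model so a nuisance factor can't be recovered from its representation can be illustrated with a much simpler linear stand-in: project the features onto the subspace orthogonal to the bias direction. This is not Jaiswal and Moyer's adversarial method - just the intuition, with invented data:

```python
import numpy as np

def forget_direction(X, bias):
    """Remove from X the direction most correlated with a bias attribute."""
    b = bias - bias.mean()
    d = X.T @ b            # feature-space direction aligned with the bias
    d /= np.linalg.norm(d)
    return X - np.outer(X @ d, d)  # project every row off that direction

rng = np.random.default_rng(1)
bias = rng.integers(0, 2, size=200).astype(float)  # hypothetical attribute
X = rng.normal(size=(200, 5))
X[:, 0] += 3 * bias        # feature 0 leaks the bias attribute

X_forgot = forget_direction(X, bias)
```

After the projection, every feature's covariance with the bias attribute is exactly zero - the linear analogue of "teaching the model selective amnesia." The adversarial version does this for nonlinear representations by pitting the encoder against a network that tries to predict the unwanted factor.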


Moral Outrage in the Digital Age


Haha this graph doesn't even mean anything, numerically, at least. Narratively, however, it gets the point across - viral emergence equals fast death.

Yale assistant professor Molly Crockett looks at how digital media changes the expression of moral outrage and its social consequences, and this WIRED article talks about her on the subject of getting mad online.

It's been a good year for speaking out against sexual harassment and abuse. Maybe it's because the president of the USA doesn't deny that he is an abusive misogynist. Maybe it's because it's finally time for us to talk about how (typically) women have to put up with forms of social behavior more appropriate for animals than people. Regardless, it's definitely time to start talking about experiences of harassment and abuse on the biggest megaphone humankind has ever created, Twitter. This comes in the form of the Me Too campaign, where people tell their stories too. That's where this article stems from.

Growing up in a hyperlinked world makes you suspicious of anything viral. Doesn't matter if it's good or bad, for or against; if it's viral, it's suspicious. Has this been engineered? Is there something inherent in the subject that makes it so susceptible to hyperlinked amplification? Is it just probability and statistics?

Why did Kony 2012 happen? At one point it was the most viral video ever. Ever. Oprah had something to do with that, just saying, in case you're looking for ingredients for a viral pie.

Point being, just because something's viral doesn't mean it's doing a good thing for the movement or campaign that it's a part of. Professor Molly Crockett, and the article below, explain this:

Me Too and the Problem with Viral Outrage
Oct 2017, Jessi Hempel for WIRED

It’s often the case that the people or organizations you shame “publicly” via social media will never see the criticism at all. Your social audience is generally a group of like-minded people—those who have already opted in to your filter bubble. Or as Crockett writes: “Shaming a stranger on a deserted street is far riskier than joining a Twitter mob of thousands.”

One of the chief reasons we decry the actions of others digitally is for our own reputational benefit—so those like-minded people will like us even more.

“People are less likely to spend money on punishing unfairness when they are given the opportunity to express their outrage via written messages instead,” she writes.

Post Script:
Can't help but raise Slavoj Zizek's ideas about consumerism and charity - how we engage in charitable acts as a way to morally license ourselves, to permit ourselves to participate in an economy and way of life that by its nature takes liberties away from others. Weak as it may be, there is a link here to this pursuit of moral license. We're all looking for qualifications for our actions (because every one of us knows that we can be called out any day for what we do, right in front of the whole world all at once). White folks want a license for black folks. Men want a permit that gets them into the women's club. The cops want to high-five the civilians. The surer the guarantee that you'll be let into the club, the greater the motivation to participate in any particular movement.

Slavoj Zizek on Consumerism and Charity: "First as Tragedy, Then as Farce"
https://www.youtube.com/watch?v=hpAMbpQ8J7g 

Notes:
Molly Crockett's TED Talk about morality and decision-making:
https://www.ted.com/talks/molly_crockett_beware_neuro_bunk