Sunday, April 29, 2018
DeepFace
Images of celebrities as minors are showing up in datasets used in making AI-generated fake porn.
I kind of hate the Deep___ thing. DeepState, DeepFake, DeepWaste, DeepFace, DeepHate. Mostly because I wrote a book about Deep Learning and Olfaction, and couldn't market it fast enough to catch the wave. DeepLate.
Still, we can't help but record all this talk here at Network Address. Especially when it comes to accidentally making child porn.
You might not know that people are making fake porn using what amounts to Photoshop-for-videos. You also might not know that there are databases of images (called facesets, yes) used for importing into these fake videos. This is a logical extension of the faceswapping we saw almost ten years ago.
The problem is when you're trawling up one of these facesets and accidentally pick up some photos of the person from when they were a kid. You know, like catching a porpoise, or a plastic bottle, when you're trying to get some tuna.
Except that now your fake porno can get you put in jail, because it's child porn. The difference is, if you try to sell someone tuna and it's really a plastic bottle, they'll probably know the difference. But if you're watching a faceswapped porno generated by combining thousands of faceshots at thousands of different angles and lighting conditions, and a few of those faces are of the same person but a year younger than 18 (because, you know, to a face-recognition algorithm, 18 and 17 are so different)...
So not only did you just accidentally make child porn, somebody else just accidentally watched it! You're all sick!
Augmented Reality Faceswap circa 2012
image source: Christian Rex van Minnen
Notes:
Fake Porn Makers Are Worried About Accidentally Making Child Porn
Mar 2017, VICE
Monday, April 16, 2018
Mimetic Security 101
I ran network address through this bias-checking site, and we're all good. Just kidding; it's too popular and I can't get in.
Fake News is really called misinformation, for those keeping track of these things. And people are pulling out all the stops trying to get a handle on it. And by 'pulling out all the stops' I mean 'making algorithms do that sh**'.
Data scientist Zach Estela trained a neural network to scan a webpage and determine the type of news it promulgates. He built his News Bias Classifier on a couple of projects already in the business of sniffing out these different information types. One is called OpenSources, and it's curated by humans whose purpose is to squash the Dubiosity Monster that is our current infostream. They read articles and tag them.
These are their tags:
Fake News, Satire, Extreme Bias, Conspiracy Theory, Rumor Mill, State News, Junk Science, Hate News, Clickbait, Proceed With Caution, Political, Credible*
*Check out their site for their definitions of these types
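Estela's actual neural network isn't spelled out in the article, but the general idea of training on human-tagged pages and predicting one of those labels can be sketched with a toy bag-of-words scorer. Everything below (class name, training examples) is made up for illustration:

```python
from collections import Counter

def tokenize(text):
    # crude tokenizer: lowercase, strip surrounding punctuation
    return [w.strip(".,!?\"'").lower() for w in text.split()]

class ToyNewsClassifier:
    """Toy stand-in for a trained news-bias classifier: scores a page
    against word counts learned from human-tagged examples."""

    def __init__(self):
        self.word_counts = {}  # label -> Counter of words seen under that label

    def train(self, pages):
        # pages: list of (text, label) pairs, labels from a taxonomy
        # like OpenSources' (clickbait, junk science, credible, ...)
        for text, label in pages:
            self.word_counts.setdefault(label, Counter()).update(tokenize(text))

    def predict(self, text):
        words = tokenize(text)
        # score each label by how often it has seen this page's words
        scores = {label: sum(counts[w] for w in words)
                  for label, counts in self.word_counts.items()}
        return max(scores, key=scores.get)

clf = ToyNewsClassifier()
clf.train([
    ("shocking miracle cure doctors hate this trick", "clickbait"),
    ("senate committee votes on appropriations bill", "credible"),
])
print(clf.predict("one weird trick doctors hate"))  # → clickbait
```

A real system would use a neural network over far richer features (and the site-level signals below), but the train-on-curated-tags loop is the same shape.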
And check out their methods; Network Address (.blogspot) is a bit disheartened to see this particular method:
Step 1: Title/Domain Analysis.
If ".wordpress" or ".com.co" appear in the title -- or any slight variation on a well-known website -- this is usually a sign there is a problem.
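That Step 1 heuristic is about a two-liner. A minimal sketch in Python, with an illustrative suffix list (not OpenSources' actual list):

```python
# Suffixes that often signal a site imitating a well-known brand.
# These two come from the heuristic quoted above; the list is illustrative.
SUSPECT_SUFFIXES = (".wordpress.com", ".com.co")

def looks_suspicious(domain):
    """Flag domains ending in a suspect suffix (e.g. abcnews.com.co)."""
    return domain.lower().endswith(SUSPECT_SUFFIXES)

print(looks_suspicious("abcnews.com.co"))  # → True
print(looks_suspicious("abcnews.go.com"))  # → False
```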
But the Art teacher in me likes this:
Step 5: Aesthetic Analysis.
Like the style guide, many fake and questionable news sites use very bad design. Are the screens cluttered, and do they use heavy-handed photoshopping or born-digital images?
The next source used for this fake news detector is Media Bias Fact Check, a semantic-analysis tool dedicated to educating the public on media bias and deceptive news practices.
I was drawn to some of their terminology, or rather the meta-terminology they use to check news and information sources on the web:
Loaded Language (Words): (also known as loaded terms or emotive language) is wording that attempts to influence an audience by using appeal to emotion or stereotypes. Such wording is also known as high-inference language or language persuasive techniques.
Purr Words: words used to describe something that is favored or loved.
Snarl Words: words used when describing something that a person is against or hates.
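Purr and snarl words can be wired into a crude loaded-language scorer. A toy sketch in Python, with invented word lists (Media Bias Fact Check's real methodology is far more involved):

```python
# Illustrative word lists only -- not from Media Bias Fact Check.
PURR_WORDS = {"brave", "freedom", "hero", "patriot"}
SNARL_WORDS = {"thug", "radical", "corrupt", "elitist"}

def loaded_language_score(text):
    """Positive score leans purr-heavy, negative leans snarl-heavy,
    zero means no loaded words detected (by these toy lists)."""
    words = text.lower().split()
    purr = sum(w in PURR_WORDS for w in words)
    snarl = sum(w in SNARL_WORDS for w in words)
    return purr - snarl

print(loaded_language_score("brave hero stands up"))  # → 2
print(loaded_language_score("corrupt thug caught"))   # → -2
```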
Notes:
Sick of Seeing Spam on His Facebook so He Built a Fake News Detector
Mar 2017, VICE
Text Analysis
Sentiment Analysis at Textbox
Fake News Detector at Fakebox
And here are some corresponding references about language and persuasion, from the sentiment analysis page:
Bolinger, Dwight. Language, the Loaded Weapon: The Use and Abuse of Language Today. Routledge, 2014.
Matthews, Jack. "The Effect of Loaded Language on Audience Comprehension of Speeches." Communications Monographs 14.1-2 (1947): 176-186.
Hayakawa, S.I., and Alan Hayakawa. Language in Thought and Action, Fifth Edition. New York: Harcourt Brace Jovanovich.
Post Script
Where would we be without Teilhard de Chardin's Noosphere?
Labels:
believability,
credibility,
fabrication,
fake,
language,
misinformation,
Noosphere,
real,
shades of truth