Sunday, May 20, 2018
It looks like the robot brains are beating the sensory prostheses. Telescopes and microscopes are way better at seeing than humans and our smartphones. But we may not need to put these advanced lenses in our phones or our future robots in order to make them superhuman. Instead, we need to fill them with fuzzier brains.
Artificial intelligence may seem like a hyper-concise, over-literal brainiac, but not these days. The new generation of AI, known simply as deep learning, is the opposite of this. It is less like a calculator and more like a guess (although technically it's both). It favors approximation over precision.
Relative to us humans, however, its results are more precise than we could ever attain. A research group at UCLA has outfitted the regular lens found on a smartphone with a 3D-printed microscope attachment and an AI that makes a really good, phenomenally good guess at what it sees. Their invention gives us back an image of the same precision as a lab-grade microscope.
Their fuzzy AI brain "learns" how to see at high resolution by being fed pairs of images, one taken with the regular smartphone and one with a lab-grade microscope. Using thousands of examples, the brain compares the one to the other, and eventually learns how to get from the one to the other - if I give you this crappy fuzzy image, how do you make it into that sharp finished product? It learns how to do that using algorithms that we don't program (the brain programs itself).
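To make that paired-image training idea concrete, here's a minimal toy sketch. This is not the UCLA team's system (theirs is a deep convolutional network); it just fits a per-pixel gain and bias by least squares over hypothetical phone/microscope image pairs, which shows the same learn-the-mapping-from-examples recipe in miniature.

```python
# Toy "learn to un-blur" fit: for each pixel position, learn a gain a and
# bias b that best map phone-image values x to microscope values y,
# using many paired training images.
def fit_pixel(xs, ys):
    # ordinary least squares for y ≈ a*x + b
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var if var else 0.0
    b = my - a * mx
    return a, b

def enhance(image, params):
    # apply the learned per-pixel correction to a new phone image
    return [a * x + b for x, (a, b) in zip(image, params)]

# made-up paired training data: the phone sees a dim, offset version of truth
truth = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
phone = [[0.5 * v + 3 for v in row] for row in truth]

params = [fit_pixel([p[i] for p in phone], [t[i] for t in truth])
          for i in range(3)]
restored = enhance(phone[0], params)
print(restored)  # → [10.0, 20.0, 30.0]
```

The real system learns a far richer, nonlinear mapping, but the supervised recipe is the same: feed it pairs, let it figure out the correction.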
It's only superficially ironic that the loose AI analogy of 'blurry vision' is used by these deep learning techniques to see in high res. The real story here is that we're using the brains, or the software, of our robots to liberate us from the hardware restrictions.
Deep learning transforms smartphone microscopes into laboratory-grade devices
Apr 2018, phys.org
Yair Rivenson et al. Deep Learning Enhanced Mobile-Phone Microscopy, ACS Photonics (2018). DOI: 10.1021/acsphotonics.8b00146
Provided by: University of California, Los Angeles
Saturday, May 19, 2018
For those who are interested in computational social thermodynamics, the discussion is getting interesting. There's a lot going on at the TED stage this year.
A guru of sorts for all things internet, Jaron Lanier has some opinions about the social implications of a highly-automated social ecosystem:
"In the beginning it was cute but as computers became more efficient and algorithms got better, it can no longer be called advertising any more - it has turned into behaviour modification." -BBC
Isn't all advertising a form of behavior modification? It tries to get you to buy something. In this case, we should not get confused by the two kinds of advertising going on when we use a service like Facebook or Google.
We are being shown things we can pay money to get, be they food-delivery or sneakers. These third-parties are advertising to us via a web service about their product. As much as this might modify our behavior, we don't see it as behavior modification until we go to buy the thing advertised. Then it's behavior modification. Before then, I'm not sure what to call it. Mind-control.
There is another kind of advertising, however, and it is the less obvious one. This is the advertising of the service via the service itself. This is the form of advertising that we don't even notice, and that's what gives it the potential to do bad things to society.
Any "free" web service must have built into it a system by which the users' behaviors are modified to increase the chance of their using the service again. If the purpose of a service is ostensibly to help people connect with each other, then guess what - that system is going to work to put you in the way of people most like you, because that's who you're more likely to connect with.
Over the large scale, this defeats the underlying nature of social networks that keeps a superentity like Facebook alive (instead of one big web, we get a bunch of separate little webs). But on a small enough scale, it keeps you hooked up and tapped into the people you are most likely to interact with. It shows you pictures like the ones you've already liked, because you're more likely to like those. It tells you about people who think and talk like you, because you're more likely to like what they say if it's similar to what you already say, hence you use the program more. (Nowadays this is called an Echo Chamber or a Filter Bubble, and it's become a pretty common idea.)
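That "show you more of what you already liked" loop is simple enough to caricature in a few lines. This is a hypothetical toy recommender, assuming nothing about Facebook's actual ranking system; it just ranks items by overlap with your liked tags, which is all it takes to start a bubble:

```python
# Toy engagement-first recommender: show each user the items most similar
# to what they already liked, and watch the pool of what they see shrink.
def similarity(a, b):
    # Jaccard overlap between two tag sets
    return len(a & b) / len(a | b)

def recommend(liked_tags, catalog, k=2):
    # rank catalog items by similarity to the user's liked tags
    ranked = sorted(catalog.items(),
                    key=lambda kv: similarity(liked_tags, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

catalog = {
    "cat_video":   {"cats", "funny"},
    "dog_video":   {"dogs", "funny"},
    "news_debate": {"politics", "debate"},
    "rant_clip":   {"politics", "outrage"},
}
# a user who liked funny cat content keeps getting funny-animal content,
# and never sees the debate clip: the bubble in four lines
print(recommend({"cats", "funny"}, catalog))  # → ['cat_video', 'dog_video']
```

Every round of recommendations narrows the tags the user can like next, so the loop feeds itself.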
Problem for us people is that this is not how society works. You can't only socialize with people like you. This is where the echo chamber or filter bubble comes from. The network gets chiseled finer and finer until it's basically a mirror of your self (although we should note, this is your outward self, the one that lives out there in society, not the inner self...but which self is the real one, I'm not here to decide). This is where, across much greater scales, the polarization of a society comes from. Less grey area, less room for debate, more need for absolutes. And that means less reality, because reality is anything but absolute.
Back to the point Jaron Lanier was trying to make, maybe reality-modification is what we should be discussing here, not behavior modification.
Far out, I'm thinking we will eventually refuse to live in each other's worlds. We will be forced to run simulations of our own preferred realities and make them compete with each other for the primary shared reality. Kind of an idea like 'all news is fake news unless it's your news'. Ultimately, we're already doing this. Watch it unfold.
Facebook and Google need ad-free options says Jaron Lanier
Saturday, May 5, 2018
A pleasure is full grown only when it is remembered. You are speaking, [human], as if the pleasure were one thing and the memory another. It's all one thing. [...] What you call remembering is the last part of the pleasure. [...] The other is only the beginning of it. When you and I met, the meeting was over very shortly, it was nothing. But still we know very little about it. What it will be as I remember it as I lie down to die, what it makes in me all my days till then - that is the real meeting. The other is only the beginning of it. You say you have poets in this world. Do they not teach you this? [...] And indeed, the poem is a good example.
For the most splendid line becomes fully splendid only by means of all the lines after it; if you went back to it you would find it less splendid than you thought. You would kill it. I mean in a poem. (p73)
Out of the Silent Planet
Uroboros Engraving by Natale Bonifacio
Delle allvsioni, imprese, et emblemi del Sig. Principio Fabricii da Teramo sopra la vita, opere, et attioni di Gregorio XIII pontefice massimo (1588)
Word of the day: ansible
Ursula K. Le Guin coined this word in her 1966 novel Rocannon's World, and it appears again in her seminal work The Left Hand of Darkness (1969).
It is a contraction of "answerable," but other scifi writers have since used it for any device capable of instantaneous, or at least faster-than-light, communication.
It's also the name of a piece of IT automation software, which is drowning out your Le Guin search results.
The Left Hand of Darkness
Ursula K. Le Guin, 1969
Sunday, April 29, 2018
Images of celebrities as minors are showing up in datasets used in making AI-generated fake porn.
I kind of hate the Deep___ thing. DeepState, DeepFake, DeepWaste, DeepFace, DeepHate. Mostly because I wrote a book about Deep Learning and Olfaction, and couldn't market it fast enough to catch up with the wave. DeepLate
Still, we here at Network Address can't help but record all this talk. Especially when it comes to accidentally making child porn.
You might not know that people are making fake porn using photoshop-for-videos. You also might not know that there's a database of images used for importing into these fake videos (facesets, yes). This is a logical extension of the faceswapping we saw almost 10 years ago.
The problem is when you're trawling up one of these facesets of someone and accidentally pick up some photos of that person when they were a kid. You know, like catching a porpoise, or a plastic bottle, when you're trying to get some tuna.
Except that now your fake porno can get you put in jail, because it's child porn. The difference is, if you try to sell someone tuna and it's really a plastic bottle, they'll probably know the difference. But if you're watching a faceswapped porno that was generated by combining thousands of faceshots from thousands of different angles and lighting conditions, and a few of those faces are of the same person but one year younger than 18 (because, you know, to a face-recognition algorithm, 18 and 18-minus-1 are so different)...
So not only did you just accidentally make child porn, somebody else just accidentally watched it! You're all sick!
Augmented Reality Faceswap circa 2012
image source: Christian Rex van Minnen
Fake Porn Makers Are Worried About Accidentally Making Child Porn
Mar 2017, VICE
Monday, April 16, 2018
I ran Network Address through this bias-checking site, and we're all good. Just kidding; it's too popular and I can't get in.
Fake News is really called misinformation, for those who are keeping track of these things. And people are pulling out all the stops trying to get a hold on it. And by 'pulling out all the stops' I mean 'making algorithms do that sh**'.
Data scientist Zach Estela trained a neural network to scan a webpage and determine the type of news it promulgates. He built his News Bias Classifier using a couple of projects already in the business of sniffing out these different information types. One is called OpenSources, and it's curated by humans whose purpose is to squash the Dubiosity Monster that is our current infostream. They read articles and tag them.
These are their tags:
Fake News, Satire, Extreme Bias, Conspiracy Theory, Rumor Mill, State News, Junk Science, Hate News, Clickbait, Proceed With Caution, Political, Credible*
*Check out their site for their definitions of these types
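For the curious, the shape of the pipeline (hand-tagged sources in, classifier out) can be sketched in toy form. This is not Estela's actual neural network; it's a hypothetical bag-of-words scorer trained on made-up examples, just to show how human tags like the ones above become training labels:

```python
# Toy stand-in for the News Bias Classifier idea: count word frequencies
# per label from human-tagged examples (a la OpenSources), then score new
# text by which label's vocabulary it overlaps most.
from collections import Counter, defaultdict

def train(examples):
    # examples: list of (text, label) pairs
    counts = defaultdict(Counter)
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(text, counts):
    words = text.lower().split()
    def score(label):
        total = sum(counts[label].values())
        return sum(counts[label][w] / total for w in words)
    return max(counts, key=score)

# made-up training snippets, tagged with two of the OpenSources-style labels
examples = [
    ("miracle cure doctors hate this secret", "clickbait"),
    ("you won't believe this one weird trick", "clickbait"),
    ("senate passes budget bill after debate", "credible"),
    ("city council approves transit funding plan", "credible"),
]
model = train(examples)
print(classify("one weird secret trick", model))  # → clickbait
```

A real version replaces the word-counting with a neural network and the four toy snippets with thousands of tagged articles, but the supervision comes from the same place: humans who read and labeled the sources.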
And check out their methods; Network Address (.blogspot) is a bit disheartened to see this particular method:
Step 1: Title/Domain Analysis.
If ".wordpress" or ".com.co" appear in the title, or any slight variation on a well-known website, this is usually a sign there is a problem.
But the Art teacher in me likes this:
Step 5: Aesthetic Analysis.
Like the style-guide, many fake and questionable news sites utilize very bad design. Are screens cluttered, and do they use heavy-handed photoshopping or born-digital images?
The next source used for this fake news detector is Media Bias Fact Check, a semantic analysis tool dedicated to educating the public on media bias and deceptive news practices.
I was drawn to some of their terminology, well, the meta-terminology they use to check news/information sources on the web:
Loaded Language (Words): (also known as loaded terms or emotive language) is wording that attempts to influence an audience by using appeal to emotion or stereotypes. Such wording is also known as high-inference language or language persuasive techniques.
Purr Words: words used to describe something that is favored or loved.
Snarl Words: words used when describing something that a person is against or hates.
Sick of Seeing Spam on His Facebook so He Built a Fake News Detector
Mar 2017, VICE
Sentiment Analysis at Textbox
Fake News Detector at Fakebox
And here's some corresponding references about language and persuasion, from the sentiment analysis page:
Bolinger, Dwight. Language: The Loaded Weapon: The Use and Abuse of Language Today. Routledge, 2014.
Matthews, Jack. “The effect of loaded language on audience comprehension of speeches.” Communications Monographs 14.1-2 (1947): 176-186.
Hayakawa, S.I., and Alan Hayakawa. Language in Thought and Action, 5th ed. New York: Harcourt Brace Jovanovich.
Where would we be without Teilhard de Chardin's Noosphere?
Thursday, March 29, 2018
I'm reading The Left Hand of Darkness, and for those who don't know, the author Ursula K. Le Guin is one of the most important science fiction writers ever, and not only for this book as a whole, but also for some things in this book.
Possibilities in gender and sexuality are a relatively common topic in scifi, but still, her description of this foreign race of people called the Gethenians is really quite shocking, especially in this day (2018/gender identity/etc).
She gets really descriptive about it, but I'll be brief. Everyone on Gethen is an ambisexual androgyne - neither male nor female, yet capable of becoming both. They are, compared to humans (Terrans, in scifi lingo), asexual, but in this story they are NOT asexual for one week of every month. At this time they are in "kemmer" (basically like being in "heat"). And when in kemmer, anything goes, and all you do is go somewhere with a lot of other people in kemmer too, and you put your hand in theirs, and as long as both parties consent, voila, sex! (and gender, see below).
Funny thing is, during today's kemmer you might turn into a woman, and next month's, a man, and you never know which it will be until the moment it happens. And the partner is always the opposite. And so it's totally normal for people to both give birth to and sire children all the same, at different times in their lives.
That's all I'll say about LeGuin's genius, because, as usual, truth is stranger than fiction, and we can now turn to nature for another example of disintegrating delineations of gender.
Behold, the anglerfish, and forget what you heard about checking off the "other" box on your birth certificate:
First-ever observations of a living anglerfish, a female with her tiny mate, coupled for life
Mar 2018, phys.org
Once a male finds a female, a seemingly impossible task in the vast open space of the deep sea, he bites onto her body, the tissues and circulatory systems of the two fuse, and he is fed by nutrients received through her blood. The male becomes a "sexual parasite," hanging on for the rest of his life and unable to free himself, fertilizing the eggs produced by the female. The male completely loses his individuality and the couple becomes a single functioning organism.