Psychedelic Artist Here
Text rules the internet. We know that. (It's why you can't google smells.)
But humans aren't born to read, and if we had the choice, we would much rather talk to things than write to them. This is why speech recognition has become such a big deal.
For a while there, voice rec was in trouble. Kind of like how autocorrect was in trouble; maybe you remember that one better.
Then came all these neural nets and backpropagation, and now we have mindbots playing video games better than humans. Machine learning and artificial intelligence are on a winning streak these days. But it's funny that, with all this talk about biased algorithms, we often fail to notice one group of people who are consistently left out, even from the most basic services offered by our digitally-assisted exocortices.
The people left out are those who are hard of hearing, who cannot see well, or who speak with impediments. Your phone may be great at deciphering your drunk-ass request for a cab at 9 p.m. on a Tuesday, but that's because it's been trained on a few happy hour voices just to deal with that particular instance.*
Your phone has not been trained to talk to you after your face has undergone reconstructive surgery that messes with your speech. Or when you can't speak at all because you can't hear. The problem is that speech recognition technology hasn't been trained on sign language.
Deep Forest Deep Dream Mandala by Nora Berg
Which brings us to the sign-language hack linked below. The way its creator does it is really clever: he uses a publicly available neural net and (quite literally) teaches it sign language, turns the recognized signs into text, and then ties it all to a text-to-speech program, which can finally talk to Alexa.
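To make that pipeline concrete, here's a minimal Python sketch of the same sign-to-speech idea. To be clear, this is not his code; the tiny classifier, the four-word sign vocabulary, and the confidence threshold are all placeholder assumptions, and the model would have to be trained on your own webcam footage before it recognized anything.

```python
# Minimal sketch: webcam frame -> sign label -> spoken word that Alexa can hear.
# Everything here (vocabulary, model, threshold) is a placeholder assumption.
import cv2          # webcam capture
import numpy as np
import pyttsx3      # offline text-to-speech
import tensorflow as tf

SIGNS = ["hello", "weather", "music", "stop"]  # hypothetical sign vocabulary


def build_classifier(num_classes: int) -> tf.keras.Model:
    """Tiny CNN that maps a 64x64 RGB frame to one of the sign labels."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])


def main():
    model = build_classifier(len(SIGNS))  # in practice, load weights trained on your own signs
    tts = pyttsx3.init()
    cam = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cam.read()
            if not ok:
                break
            # Preprocess the frame the same way the training data was prepared.
            small = cv2.resize(frame, (64, 64)).astype(np.float32) / 255.0
            probs = model.predict(small[np.newaxis], verbose=0)[0]
            if probs.max() > 0.9:                  # only act on confident predictions
                word = SIGNS[int(probs.argmax())]
                tts.say(word)                      # speak the recognized sign aloud...
                tts.runAndWait()                   # ...so a nearby Alexa can respond to it
    finally:
        cam.release()


if __name__ == "__main__":
    main()
```

The point is how few moving parts there are: a camera, a classifier, and a speaker sitting close enough to Alexa.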
What I want to know is, how long do I have to get this thing to watch me before it can recognize my thoughts and turn them into words so that I can finally be one with the internet??
Sign-language hack lets Amazon Alexa respond to gestures
July 2018, BBC News
Post Script
The Alcohol Language Corpus (of drunk speech)
Between 2007 and 2009, linguistic researchers from the Bavarian Archive for Speech Signals at the Ludwig Maximilian University of Munich and the Institute of Legal Medicine in Munich in Germany convinced 162 men and women to get drunk [and talk Drunken John into a voice recorder].
-Fast Company
Post Post Script
I'm sure I concur with everyone else saying that Alex Grey is the only artist whose work Deep Dream actually makes look worse! Still, how is it that I can't find more of it online? Also, can't someone train a network on his paintings instead of the corpus they used for Deep Dream?
Alex Grey Meets Deep Dream
Alex Grey As Himself
Post Post Post Script
Accidental robot-hole. I should know way more about neural style transfer, which came out last year as an extension of the Deep Dream project.
Style transfer is where you take a neural net that has already been trained to see, pull the textures and color statistics (the "style") out of one image, and then optimize a new image so it keeps its own content but takes on that style. It makes way more sense when you see it (there's a rough code sketch of the mechanics after the examples below).
Here, a painting of Napoleon, restyled with Raphael's School of Athens:
Meta Napoleon Bonaparte, unknown origin
Deep Escher in Athens, unknown origin
Neil deGrasse Tyson x Kandinsky
Brad Pitt x Duchamp(?)
Abstract Cat
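For the curious, here's a rough Python sketch of the mechanics behind images like these, in the spirit of the classic Gatys-style optimization (one common way to do it, and not necessarily how any of the images above were made). The layer choices, step count, and loss weights are just standard-tutorial defaults, and the images are assumed to arrive as float arrays of shape (1, height, width, 3) scaled to [0, 1].

```python
# Sketch of Gatys-style neural style transfer: keep one image's content,
# borrow another image's style (channel correlations in VGG19 features).
import tensorflow as tf

CONTENT_LAYER = "block5_conv2"
STYLE_LAYERS = ["block1_conv1", "block2_conv1", "block3_conv1", "block4_conv1"]


def feature_extractor() -> tf.keras.Model:
    """Pretrained VGG19 that returns activations at the chosen style and content layers."""
    vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
    vgg.trainable = False
    outputs = [vgg.get_layer(name).output for name in STYLE_LAYERS + [CONTENT_LAYER]]
    return tf.keras.Model(vgg.input, outputs)


def gram_matrix(feats):
    """Style is summarized as correlations between feature channels."""
    x = tf.reshape(feats, (-1, feats.shape[-1]))
    return tf.matmul(x, x, transpose_a=True) / tf.cast(tf.shape(x)[0], tf.float32)


def stylize(content_img, style_img, steps=200, style_weight=1e-2, content_weight=1e4):
    extractor = feature_extractor()
    prep = lambda img: tf.keras.applications.vgg19.preprocess_input(img * 255.0)
    style_targets = [gram_matrix(f) for f in extractor(prep(style_img))[:-1]]
    content_target = extractor(prep(content_img))[-1]

    image = tf.Variable(content_img)  # start from the content photo and repaint it in place
    opt = tf.keras.optimizers.Adam(learning_rate=0.02)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            feats = extractor(prep(image))
            style_loss = tf.add_n([
                tf.reduce_mean((gram_matrix(f) - target) ** 2)
                for f, target in zip(feats[:-1], style_targets)
            ]) * style_weight / len(STYLE_LAYERS)
            content_loss = tf.reduce_mean((feats[-1] - content_target) ** 2) * content_weight
            loss = style_loss + content_loss
        grads = tape.gradient(loss, image)
        opt.apply_gradients([(grads, image)])
        image.assign(tf.clip_by_value(image, 0.0, 1.0))
    return image.numpy()
```

Hand it a content photo and a style painting and, a few hundred optimization steps later, the photo comes back repainted with the painting's textures and color statistics.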
Here's a place that will do it for you, but they have their own pre-designed style templates (called pastiches) - Deep Art
There are also a few apps that do it; Prisma is one.
And finally, don't forget who started this all:
Abstraction of a Cow, Theo van Doesburg, circa 1917
Abstraction of a Tree, curated from Piet Mondrian, circa 1912