Sunday, September 16, 2018

White Out


Political Correction Fluid (i.e., White Out).

I always thought it was weird; glad I'm not the only one. When I decided to slap together a barebones Asus ten years ago, I was surprised to find the drive configuration referred to as master-slave. I didn't think too much of it at the time, but whenever it comes up, a part of me does wonder why we don't put this higher on the politically correct priority list, like somewhere above he/she/xe.

Like, is the terminology we use for computers hundreds of years old or something, where we just keep using the same words even though their original meaning is completely lost? (Oh wait, but is that part of the hegemony of the privileged, i.e., it's so ubiquitous it's invisible?)

Turns out that yes, these terms do come from a place long before Mr. Babbage's scribbles. Well, not too long before that - they're from the world of motor control, when we started to make really complicated machines; maybe it goes all the way back to watchmaking. But it stops here, 2018.

As far back as 2003, folks in California were asking manufacturers to come up with a better alternative that doesn't make us all really uncomfortable. Maybe we're not all really uncomfortable. Then again, maybe the people who are uncomfortable don't really matter. Or they don't matter enough, in comparison to how important it is to maintain the lexical inertia of engineering.

Regardless, one of the main programming languages of our time has now decided to rearrange the playbook, or the instruction manual, as it were. Personally, however, I'm really not happy at all with the results. Parent-helper and parent-worker sound stupid to me.
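
For what it's worth, the change is about names, not behavior. Here's a trivial sketch of the kind of rename at stake (the variable names here are mine for illustration; the actual Python changes were to docs and internals, not to this API):

# The same fan-out pattern, with the old names retired.
# Before: a master process dispatches work to its slaves.
# After: a parent process dispatches work to its workers.
from multiprocessing import Process

def work(n):
    print(f"worker {n} checking in")

if __name__ == "__main__":
    workers = [Process(target=work, args=(i,)) for i in range(4)]  # formerly "slaves"
    for w in workers:
        w.start()
    for w in workers:
        w.join()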


Notes:
Master/Slave Terminology Removed from Python
Sep 2018, VICE
https://motherboard.vice.com/en_us/article/8x7akv/masterslave-terminology-was-removed-from-python-programming-language


Wednesday, September 12, 2018

Views of the Future


The expanding brain meme is in order.

Look, I'm not in love with Elon Musk or anything, but I got pretty pissed when I read a Futurism article about his interview with Joe Rogan.

I obviously couldn't care less about whether or not somebody "inhaled," I mean, I was around during the Bill Clinton administration, after all. Not to mention, what the hell do people think is going to happen on the Joe Rogan show?

The part I'm upset about is where people seem to think Musk is being a loopy headfreak when he's asked about future-trending topics like AI sentience.

And that's where I'll cut - he's asked when he thinks AI will become sentient. And he responds, very slowly like every other answer he gives during this kind of painful interview - he says something that sounds to many like, "We're all trees, man."

You're thinking, "he didn't answer the question at all." But I think you're wrong. His actual answer went like this:

Joe: How far are humans from creating sentient AI?
Musk: You could argue that any group of people — like a company — is essentially a cybernetic collective of human people and machines. That’s what a company is. And then there are different levels of complexity in the way these companies are formed and then there is a collective AI in Google search, where we are also plugged in like nodes in a network, like leaves in a tree. 
We’re all feeding this network with questions and answers. We’re all collectively programming the AI and Google. […] It feels like we are the biological boot-loader for AI, effectively. We are building progressively greater intelligence. And the percentage that is not human is increasing, and eventually we will represent a very small percentage of intelligence.

This is coming from a guy who is making, besides reusable rocket ships and statewide underground transport tunnels and consumer blowtorches, a neural interface system. And in case folks forget, because this was a while ago, he made PayPal. He may not be Jaron Lanier, but he understands how these things work, and he's telling us that in regard to AI and sentience, we're asking the wrong question.

I'm going to elucidate what I think he was trying to say when he answered this, or at least what maybe he should have said considering his audience in this particular venue.

First, there is a fine line between us and AI. When you consider that we as users are the training program for many of these tools, such as a massive search engine, it should make you wonder which part of it is us and which part the machine.

Second, when we realize that we have essentially been programming these things since the dawn of what I will call, for lack of a more neutral word, the surveillance state, we must admit that the sentience is already here.

So, even more briefly, what he could have said to the question - when will AI become sentient - was that AI is us, and it's already happened.


All this being said, I spent a good three hours watching a debate on the floor of the Parliament about the real threats of AI, not to mention having watched a good thousand hours of other scientists, researchers, engineers, and other professionals talk about their work in the field. With that, I concluded to myself that an Artificial General Intelligence will not be here for another couple of generations, and that the threat will be from ourselves, not a sentient system.

Musk seemed to be in concert with the latter part of this, although he's still pretty spooked about what could happen.

This brings me to my next point, which is that Musk is a guy for whom the world moves way faster than it does for most. We have to remember, if not for him, a working reusable rocket would have taken another 50 years at least, if NASA or some other government agency were in charge of the project, for example. It doesn't mean that because of him the world moves faster, but that when he talks about things, he sees them from a very different perspective, one where amazing things you can barely imagine can happen tomorrow. And he probably assumes that you are in his head with him, which you are not, and Joe Rogan is not, and apparently this writer for Futurism is not.

But fellow speculative fiction enthusiasts are, and hopefully they would agree that it's quite entertaining to listen to this guy talk about things. (Again, if you can get around the 10-second pause before answering every question.)

Finally, full disclosure: I, along with Musk and Neil deGrasse Tyson, am a no-case guy (no cellphone case, that is).

Post Script
The comment about how we're probably living in a simulation - I'm not a proponent of this, as there are more convincing arguments against it than for it coming from the physics world. However, his comment on how in the future we will all be living in a game, and it will be indistinguishable from reality - well, I'll ask folks to read Charles Stross' Halting State, or Vernor Vinge's Rainbows End, or ... many scifi novels written in the past thirty years? (And also a Network Address post from wayback.)


Monday, August 13, 2018

Where's My Thought Translator

Psychedelic Artist Here

Text rules the internet. We know that. (It's why you can't google smells.)

But humans aren't born to read, and if we had the choice, we would much rather talk to things than write to them. This is why speech recognition has become such a big deal.

For a while there, voice rec was in trouble. Kind of like how autocorrect was in trouble; maybe you remember that more.

Then came all these neural nets and backpropagation, and now we have mindbots playing video games better than humans. Machine learning artificial intelligence is on a winning streak these days. But it's funny that with all this talk about biased algorithms, we often fail to recognize one group of people that is consistently being left out, even from the most basic of services offered by our digitally-assisted exocortices.

The people left out are those who are hard of hearing, who cannot see well, or who speak with impediments. Your phone may be great at deciphering your drunk-ass request for a cab at 9 p.m. on a Tuesday, but that's because it's been trained on a few happy hour voices just to deal with that particular instance.*

Your phone has not been trained to talk to you after your face underwent reconstructive surgery, messing with your speech. Or when you can't speak at all because you can't hear. The problem is that speech recognition technology hasn't been trained on sign language.

Deep Forest Deep Dream Mandala by Nora Berg

Along comes this gentleman, who basically mixes a Nintendo Wii with an Amazon Alexa to allow anyone using sign language to communicate with their omniscient overlords.

The way he does it is really clever, though: he uses a publicly available neural net, teaches it to recognize sign language, converts the recognized signs into text, and then ties it all to a text-to-speech program, which can finally talk to Alexa.
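
Here's a rough sketch of that pipeline in Python, just to make the plumbing concrete (the model file, the label list, and the confidence threshold are all placeholders of mine; the real project reportedly ran its net in the browser):

# Sketch of the sign-to-speech pipeline: webcam frame -> sign classifier
# -> text -> text-to-speech aimed at the Alexa across the room.
import cv2                      # pip install opencv-python
import numpy as np
import pyttsx3                  # pip install pyttsx3
from tensorflow.keras.models import load_model

LABELS = ["hello", "weather", "music", "stop"]  # hypothetical sign vocabulary
model = load_model("sign_classifier.h5")        # hypothetical trained net
tts = pyttsx3.init()
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    x = cv2.resize(frame, (224, 224)).astype("float32") / 255.0
    probs = model.predict(x[np.newaxis], verbose=0)[0]
    if probs.max() > 0.9:                       # only act on confident signs
        word = LABELS[int(probs.argmax())]
        tts.say(f"Alexa, {word}")               # speak out loud at the speaker
        tts.runAndWait()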

What I want to know is, how long do I have to get this thing to watch me before it can recognize my thoughts and turn them into words so that I can finally be one with the internet??

Sign-language hack lets Amazon Alexa respond to gestures
July 2018, BBC News

Post Script
The Alcohol Language Corpus (of drunk speech)

Between 2007 and 2009, linguistic researchers from the Bavarian Archive for Speech Signals at the Ludwig Maximilian University of Munich and the Institute of Legal Medicine in Munich in Germany convinced 162 men and women to get drunk [and talk Drunken John into a voice recorder].
-Fast Company

Post Post Script
I'm sure I concur with everyone else who is saying that Alex Grey is the only artist whose work Deep Dream actually makes look worse! Still, how is it that I can't find more of it online? Also, can't someone train a network on his paintings instead of the corpus they used for Deep Dream?

Alex Grey Meets Deep Dream

Alex Grey As Himself

Post Post Post
Accidental robot-hole. I should know way more about neural style transfers, which came out last year as an extension of the Deep Dream project.

Style transfer is where you train a neural net to pick up a particular style, overfitting for it on purpose, and then fit that style onto a new image. It makes way more sense when you see it.
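
If you want to try it, the easiest route nowadays is a pre-trained model. A minimal sketch using Magenta's arbitrary-image-stylization model from TensorFlow Hub (a real published model; the image file names are placeholders):

# One content image + one style image -> stylized output.
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path, size=512):
    img = tf.io.read_file(path)
    img = tf.image.decode_image(img, channels=3, expand_animations=False)
    img = tf.image.convert_image_dtype(img, tf.float32)
    img = tf.image.resize(img, (size, size))
    return img[tf.newaxis, ...]  # add a batch dimension

model = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
content = load_image("napoleon.jpg")
style = load_image("school_of_athens.jpg")
stylized = model(tf.constant(content), tf.constant(style))[0]
tf.keras.preprocessing.image.save_img("napoleon_in_athens.png", stylized[0].numpy())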

Here, a painting of Napoleon, trained on Raphael's School of Athens:
Meta Napoleon Bonaparte, unknown origin

Here, a famous Escher print, also trained on the School of Athens:
Deep Escher in Athens, unknown origin
Neil deGrasse Tyson x Kandinsky
Brad Pitt x Duchamp(?)
Abstract Cat

Here's a place that will do it for you, but they have their own pre-designed style templates (called pastiches) - Deep Art

There are also a few apps that do it; Prisma is one.

And finally, don't forget who started this all:
Abstraction of a Cow, Theo van Doesburg, circa 1917

Abstraction of a Tree, curated from Piet Mondrian, circa 1912

Friday, August 3, 2018

This Internet Secret Will Make You Crazy



Gotcha! If you're reading this, then you obviously clicked, which makes you a lot more like everyone else than you thought you were, unless you already thought you were.

This phrase "will make you" is the number one choice for writers trying to bait readers into clicking their article, and it's twice as strong as the next best phrase.

For the most part, I couldn't give a crap about search engine optimization and clicks. Maybe I should, since I'm trying to sell a book on my other blog. Nonetheless, I couldn't care less. But I was looking up SEO tips just to make sure I don't care.

And now I'm sure I don't care about SEO on my website. What I found in the meantime will make you cry. Just kidding, that's another clickbait headline. I found this handy graph that lists the most irresistible headlines.
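
The trick is mechanical enough that you can score it. A toy version (the phrase list and weights here are improvised, loosely in the spirit of that graph, not the actual study data):

# Toy clickbait scorer: counts known bait phrases in a headline.
BAIT_PHRASES = {
    "will make you": 2.0,      # the champion, so it gets double weight
    "this is why": 1.0,
    "can we guess": 1.0,
    "you won't believe": 1.0,
    "talking about it": 1.0,
}

def bait_score(headline):
    h = headline.lower()
    return sum(w for phrase, w in BAIT_PHRASES.items() if phrase in h)

print(bait_score("This Internet Secret Will Make You Crazy"))  # 2.0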

Headlines aren't the only thing. There are a few things you can do to get your site more traffic. First of all, you need to put original data on there. Do some research, make your results look pretty, and post it up. Then people will go to your page because you have something nobody else does. Pretty sure I don't have much original research on here, except for some pictures here and there.

You can also beg other people to link back to your page from their own sites. Being that my site is anonymous, I don't see myself asking anyone for endorsements.

Or you can make sure your posts are 1900 words. That's a bit too long for me.

And finally, you can post naked pictures of yourself. Just kidding.

And finally, you can use headlines that grab hold of people's limbic system, taking control of their motor cortex and making them share, like, and retweet the sh** out of your content as if it were actually good.

I met a friend of a friend once who worked for a digital news outlet in our area; his job every morning was to go on Google Trends, find all the buzzwords, and rearrange all the articles on their site to use those words, probably embedded in these catchy phrases you see above. That sounds like a cool job, despite the fact that I find something wrong with all this.

There's something about it that seems unfair. As a teacher all my life up until recently, I always had a strong urge to empower people's minds and dispel confusion. Behavioral psychology is a powerful thing, and with it, yes I'm going to say it, comes great responsibility. Maybe a part of me believes that once you discover a trick like this, you can only use it for good, not to make money, or to influence politics, or to get yourself a date.

But if you couldn't care less about responsibility, or morals, or enriching the minds of others to make them more powerful and less susceptible to deceit, as opposed to taking advantage of those breaches in our mental fortitude, then this stuff is for you. Game on!

Tuesday, July 17, 2018

On Ownership


NASA doesn't get sued, but when it does, it's over property rights.

Tl;dr - 50 years ago, Neil Armstrong gives a 10-yr-old a vial of moon dust. Today, the 60-yr-old 10-yr-old pre-emptively sues NASA in case it comes after her over ownership of said dust.

Hmmm, you wonder. Where was I when Neil Armstrong was handing out moon dust? Then you ask, wait a minute, who owns the moon?

NASA owns the moon. Actually not NASA, but the USA owns the moon.

Apparently there's a little black market for space paraphernalia (worth tens of millions at least), and SWAT teams have been known to work real hard to bust up that market and return such otherworldly items to their rightful owners.

There's a grey area in that little black market, however: 1. Astronauts should be able to do whatever they want with the stuff they bring home, and 2. There isn't a specific law saying that private persons can't own moon dust.

I don't know enough about property rights to keep this conversation going. But I do hark back to the early Native Americans' puzzlement about fences and property, and wonder what they would think about this.

Notes:
Woman sues Nasa over ownership of moon dust vial
June 2018, BBC News

Sticks and Stones



Language software for recruitment helps businesses hire the right people. The right software will know which words to use in the recruitment process to get targeted results.

First of all, what the hell am I talking about? You know when you've been looking for a job for the last three months and your life savings is running out and you're really wondering what it would be like to wake up in your car and shower at the YMCA and how it might not be "as bad as they make it out to be" and then you see a job opening that you're qualified for and you're like, nah, too much hidden hostility in my semantic analysis of this job posting?

Me neither. But if you're the kind of person who already has a great job making lots of money and a huge difference in the way people live in the world together and you're trying to propel your future even further, then this is for you:

We might not realize it, but the terminology used in job descriptions turns some people off, subconsciously or not. And if you're trying to be a good business, you need to get everybody on board, not just the typical Mr. Obvious, or Ms. Obvious as the case may be.

Women don't like 'coding ninja,' as it signals hostility in the workplace. 'Stakeholders' means non-white people need not apply. 'Competitive' and 'leader' are discouraging terms for female candidates, according to one textmaster recruiter optimizer, whereas 'support' and 'interpersonal' are inviting.
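
A toy version of what these analyzers do under the hood (the word lists here are mine, improvised for illustration; real products use research-derived lexicons of gender-coded terms):

# Count 'discouraging' vs 'inviting' words in a job posting.
DISCOURAGING = {"competitive", "leader", "dominant", "ninja", "rockstar"}
INVITING = {"support", "interpersonal", "collaborative", "community"}

def audit_posting(text):
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    return {
        "discouraging": sorted(words & DISCOURAGING),
        "inviting": sorted(words & INVITING),
    }

print(audit_posting("Seeking a competitive coding ninja with interpersonal skills"))
# {'discouraging': ['competitive', 'ninja'], 'inviting': ['interpersonal']}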

Businesses care because the more balanced your workforce, the more profitable your work. This would explain why there are so many of these text-analysis services for job postings out there, enough for me to find an article on it in the national news.

Here's an example of why we need to think more about inclusionary language in our tech-driven, text-based world:
Man is to computer programmer as woman is to homemaker
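
That line refers to a real word-embedding experiment, and you can reproduce its flavor with gensim's pretrained vectors (the model name below is a real gensim-data package, though the original paper used word2vec trained on Google News, so your nearest neighbors may differ):

# Word-vector analogies: king - man + woman ~= queen ... and worse.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads once, ~130 MB

print(vectors.most_similar(positive=["woman", "king"], negative=["man"], topn=1))
# the classic demo; swap in occupation words to see the biased analogies
print(vectors.most_similar(positive=["woman", "programmer"], negative=["man"], topn=3))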

Notes:
Why some job adverts put women off applying
July 2018, BBC News

Post Script:
Since we're getting all semantimaniacal here, can I just point out that the word 'inclusionary' shows up as a misspelling in my autocorrect, with 'exclusionary' the suggested replacement. Just saying. (And no, 'semantimaniac' isn't in there either...).

Tuesday, July 10, 2018

Post Script

Max Ernst and the rest of the Surrealists experimented with 'automatic generation' a hundred years ago.

I'm reading an article here about how we're now using 'robot-generated script' to make things funny. Because, you know, robots are stupid, and we like to laugh at stupid things.

You give a script-writing robot a thousand Seinfeld episodes and ask it to make a Seinfeld episode. And when it messes up, we laugh.
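
The cheapest possible script-writing robot, for the curious, is a Markov chain; it nails the word-to-word texture and fumbles everything else, which is exactly the comedy. A sketch (the corpus file name is hypothetical):

# Minimal Markov-chain babbler: learn which word follows which, then generate.
import random
from collections import defaultdict

def train(text):
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, length=40):
    word = random.choice(list(chain))
    out = [word]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = open("seinfeld_scripts.txt").read()  # hypothetical corpus file
print(babble(train(corpus)))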

I'm saying all this half tongue-in-cheek. Don't get me wrong, a lot of this stuff is funny. Maybe these smarty-pants experimenting with neural nets can give you plenty of examples of what I'm talking about.

It's funny when a computer screws up. It's funny when anyone screws up. I had a classmate in third grade who wore yellow-tinted stonewash jeans, and I remember making fun of him and getting in trouble for it. The stonewash was right on for that time in the world of fashion, but the yellow not so much. Things have to be messed up to be funny, but not too messed up.

There's a good formula for funny (and a good graph too) which says the level of funniness in a joke is a function of the probability of the punchline vs your expectations. Researchers exploring 'creative AI' look at the novelty vs the quality, because to be creative we have to be new, novel, unexpected, but not completely out of the ballpark.
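
That inverted-U shape is easy to play with. A toy version (the curve is my own improvisation on the idea, not the actual formula from the Anatomy of a Joke post linked below):

# Surprise helps a joke up to a point, then it's just noise.
import math

def funniness(p_punchline):
    surprise = -math.log(max(p_punchline, 1e-9))  # rarer punchline = more surprise
    return surprise * math.exp(-0.5 * surprise)   # too much surprise kills it

for p in (0.9, 0.5, 0.1, 0.001):
    print(p, round(funniness(p), 3))
# peaks at moderate surprise; both the obvious and the absurd score low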

My friend in 3rd grade got the stonewash right, but not the yellow dye. A trained neural net (I call them all robots for short) gets most of the material right, but once in a while it throws in something crazy (something wrong), and we laugh.

The part where things get tricky is when we stop to consider what we're laughing at - is it the abstracted novelty of the output, or is it that we've assigned agency to the network and are now making fun of it for messing up?

I'm just saying, we might not want to get into the habit of poking fun at these things - not because they will one day retaliate and destroy us, but because they are a reflection of ourselves.


image source: Max Ernst, L'Ange du Foyer (The Fireside Angel), 1937

Was That Script Written By A Human Or An AI? Here’s How To Spot The Difference
Jun 2018, Futurism

Anatomy of a Joke
2012, Network Address

Botnik is a community of writers, artists and developers using machines to create things on and off the internet.