Saturday, December 16, 2017

Blockchain-Breedable 256-bit Genomes For Sale


CryptoKitties craze slows down transactions on Ethereum
Dec 2017, BBC

But that's not how any of this works.

Digital currencies were supposed to be used to trade money online, not to serve as a lab for Frankensteined bit-genomes.

But of course, this IS how these things work. The internet was supposed to be for sending science articles among scientists.

Ether is a cryptocurrency like Bitcoin, supported by a record of transactions that is copied and stored on lots of different computers spread all over the world. In this case, that network is called Ethereum. People are using it to buy pizza and drugs, and to make money by speculation.

But people are also now using it to play with digipets, or cryptokitties. But more than that, they aren't just playing - they're breeding these things by turning a digital money-transfer service into a bio-inspired creation machine.

Each kitty is unique, and their unique DNA can lead to four billion possible genetic variations when they breed. I wish I knew more about how numbers work. There is a lot of computation going on here in order to do this. People are worried that this "frivolous game" is holding up real business.

You know, until it turns out that the game is the real business.
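For what it's worth, "four billion" is roughly 2**32 - 32 bits of variation out of a 256-bit genome. The actual CryptoKitties breeding function (the GeneScience contract) was deliberately obfuscated on-chain, so the following is only a toy sketch of bitwise gene crossover; the gene count, gene width, and coin-flip inheritance are made-up stand-ins, not the real algorithm.

```python
import random

GENES = 48          # toy gene count (the real contract packs traits differently)
BITS_PER_GENE = 4   # 16 possible values per gene in this sketch

def breed(matron: int, sire: int, seed=None) -> int:
    """Toy crossover: each 4-bit gene of the child comes from one parent,
    chosen by coin flip. No mutation, no cooldowns, no rare-trait bonuses."""
    rng = random.Random(seed)
    child = 0
    for i in range(GENES):
        shift = i * BITS_PER_GENE
        parent = matron if rng.random() < 0.5 else sire
        child |= ((parent >> shift) & 0xF) << shift
    return child
```

Even the cartoon version shows where the combinatorics come from: with two distinct parents, each gene is an independent coin flip, so a single pairing can yield up to 2**48 different children in this sketch.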

Post Script
Absolutely completely unrelated - in case you haven't noticed, google owns pinterest, and when you image search, there's a high chance you're being directed to pinterest to see the full image of the thumbnail you're viewing in the google results. But you have to be a user to get in and see the image. Smart way for google to make sure everyone is using its other services. And a good way for folks to understand how monopolies work, and corporate/capitalistic culture in general.

Unless you would like to add "-site:pinterest.com" to your search. For example, the search bar would contain "old school refrigerators -site:pinterest.com". It helps.

In fact, run the search with and without it and see just how much google has gamed the image search market.

Friday, December 15, 2017

Mind Says What


Hypnotic suggestion prevents action, not recognition
Nov 2017, Chris Lee, Ars Technica

If someone under hypnosis is told that their view is obscured, do they really not see, or are they unable to act on what their brain is yelling at them?

So, what have we learned? First of all, in this task, we know that the brain still sees the objects on the screen, but that hypnosis suppresses a response to the object. -arstechnica


A writer for Ars Technica describes this experiment in hypnosis in good technical detail - the actions, reactions and preactions undertaken by the brain in its attempt to make us do things.

Taking direction 'from ourselves' is pretty new in human history. Julian Jaynes' theory of the bicameral mind says that it was our taking directions from others that eventually allowed us to 'tell ourselves what to do'.

Think what you will of hypnosis, but the fact that it seems to work on people is evidence that a part of our brain prioritizes following instructions over personal, subjective volition. The higher, more developed parts of our computer-for-a-head generate the instructions nowadays. But that apparatus may have originally been built to only listen, not to generate.

The generation of the instructions came from gods, from outer space, from somewhere outside the person. Shamans, priests, king-gods were the people who caught the information first - when to raise crops, when to migrate. Everyone else would listen. They were either the only ones who could hear, or the only ones allowed to listen to, the out-of-body authority. At that time, we needed to prioritize following instructions - even if they seemed counterproductive or impossible - over trying to come up with our own. And hypnosis is a vestige of this.



The Power of mind: Blocking visual perception by hypnosis
B. Schmidt, H. Hecht, E. Naumann & W. H. R. Miltner
Scientific Reports 7, Article number: 4889 (2017)
doi:10.1038/s41598-017-05195-2

There are a lot of references here more than 20 years old, but maybe that's the nature of the hypnosis literature?

Also, check out the Julian Jaynes Society for info on his bicameral mind theory.

Sunday, December 10, 2017

Reality Generators


What's real these days? Remember, a few years ago, an application that took five consecutive photos of your family and blended them together so that nobody is blinking or making a stupid face? It takes the best face of every person in the series of photos, and puts only that face in the picture. The final, fused photo documents a moment that never existed. Rather, it is not documenting a moment; it is creating a moment.

Moving on, we now see an application that creates faces from scratch. The system looks at thousands of faces and learns what a face is, and then creates its own faces.

I'm thinking here about facial recognition and how I would like to now have a 'fake' face for a face so that nobody knows what my real face looks like. Can I do that? Better yet, can I have a nice little program that makes entirely fake pictures from scratch, uses them to populate a fake facebook page, and then makes fake friends with their own fake pictures, all talking to each other - an entirely fake social ecosystem or social network? Can we do that? How fake can we get until the fake thing is bigger than the real thing?


These People Never Existed. They Were Made by an AI.
Oct 2017, futurism.com

As part of their expanded applications for artificial intelligence, NVIDIA created a generative adversarial network (GAN) that used CelebA-HQ’s database of photos of famous people to generate images of people who don’t actually exist. The idea was that the AI-created faces would look more realistic if two networks worked against each other to produce them.
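The "two networks worked against each other" line is the whole trick, and it can be shown in miniature. Below is a toy GAN in numpy - nothing like NVIDIA's actual image model, just the adversarial loop at its smallest: the "generator" is a single shift parameter theta trying to make noise look like samples from N(3, 1), and the "discriminator" is a one-feature logistic classifier. All hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data comes from N(3, 1); the generator shifts standard noise by
# theta and must learn theta ~= 3 to fool D(x) = sigmoid(a*x + b).
mu_real = 3.0
theta, a, b = 0.0, 0.1, 0.0
batch = 64

for step in range(4000):
    lr = 0.05 / (1 + step / 1000)   # decaying step size to damp oscillation
    real = rng.normal(mu_real, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + theta

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    d_real = sigmoid(a * real + b)
    d_fake = sigmoid(a * fake + b)
    a += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr * np.mean((1 - d_real) - d_fake)

    # Generator: gradient ascent on log D(fake), i.e. learn to look "real"
    d_fake = sigmoid(a * fake + b)
    theta += lr * np.mean((1 - d_fake) * a)
```

No one ever tells the generator what a "real" sample looks like; theta drifts toward 3 only because that is what fools the discriminator - the same pressure, scaled up enormously, that produces photorealistic non-people.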

Making Monsters


The fascinating part about this is how alligator embryos are being modified to grow like a dinosaur. If I understand correctly, this kind of manipulation can only be done to an embryo living up to 28 days or so? After that it's considered alive, or at least it's considered wrong to keep an experiment alive past that time. Because it's immoral? Because it can reproduce and make uncontrollable monsters? Because that's just what the law says?

How dinosaur scales became bird feathers 
Nov 2017, BBC

The genes that caused scales to become feathers in the early ancestors of birds have been found by US scientists.

By expressing these genes in embryo alligator skin, the researchers caused the reptiles' scales to change in a way that may be similar to how the earliest feathers evolved.

"You can see we can indeed induce them to form appendages, although it is not beautiful feathers - they really try to elongate," explained Prof Chuong of the University of Southern California, in Los Angeles. They are likely similar to the structures on those feather-pioneering dinosaurs 150 million years ago.

[genes are complicated]

Modern feathers involve a range of different genes working together and being expressed at the right time and in the right space during the embryo's development. This new work helps to establish how feathers initially evolved, around 120 to 150 million years ago, but hints at five separate genetic processes active in birds that needed to work together to create modern feathers.

Saturday, December 9, 2017

How To Be Human


There is a chatbot that pretends it's a kid so that it can catch child predators. It doesn't entrap them into doing anything illegal; the point is to make the offender aware that what they're doing is wrong.

The part I found most interesting about this was how the developers used real people from the sex crime world - people who had been preyed upon - to help design a believable-sounding bot.


The chatbot taking on Seattle's sex trade
Nov 2017, BBC

The challenge for developers was to make sure this chatbot was authentic. Any unusual behaviour, or nonsensical response, would tip off the target.

"We work with survivors of trafficking to ask them how a conversation like this would go," explains Mr Beiser.

It's the small touches that help here. Replies aren't instant. There is sloppy, bad English. It's by no means perfect, but during the bot's test phase earlier this year, 1,500 people interacted with the bot long enough to receive the deterrence message - a remarkable completion rate given the bot will ask for a selfie of the buyer as part of that conversation.

As more people use the bot, the smarter it could potentially become. The project has the backing of Microsoft, one of the tech firms leading the way on natural language research.
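Those "small touches" - delayed replies, sloppy typing - are mechanical enough to sketch. This is not the real project's code, just a guess at how one might humanize a scripted reply; the function name and parameters are invented.

```python
import random

def humanize(reply: str, rng: random.Random, typo_rate: float = 0.05):
    """Make a scripted reply feel less bot-like: add a typing delay
    (no instant answers) and occasionally swap adjacent letters
    (sloppy typing). Purely illustrative."""
    chars = list(reply)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < typo_rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    delay = 1.5 + 0.2 * len(reply) + rng.uniform(0.0, 3.0)  # seconds before sending
    return delay, "".join(chars)

delay, text = humanize("sry cant talk now, who is this", random.Random(3))
```

The caller would wait `delay` seconds before sending `text` - a cheap trick, but exactly the kind of detail that keeps a target from noticing they're talking to a machine.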

Think About This

Dr. Frankenstein Light Switch, turn it up.

Carefully crafted light pulses control neuron activity
Nov 2017, phys.org

Specially tailored, ultrafast pulses of light can trigger neurons to fire and could one day help patients with light-sensitive circadian or mood problems, according to a new study in mice at the University of Illinois.

The study used optogenetic mouse neurons - that is, cells that had a gene added to make them respond to light.

"What we're doing for the very first time is using light and coherent control to regulate biological function."

Sunday, November 26, 2017

Hiding in Plain Sight

This is obviously an air conditioner

AI image recognition fooled by single pixel change
Nov 2017, BBC

Computers can be fooled into thinking a picture of a taxi is a dog just by changing one pixel, suggests research.

The limitations emerged from Japanese work on ways to fool widely used AI-based image recognition systems.

Many other scientists are now creating "adversarial" example images to expose the fragility of certain types of recognition software.

"There is certainly something strange and interesting going on here, we just don't know exactly what it is yet," he said.
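The attack itself is conceptually simple. The paper used differential evolution, but even dumb random search gets the idea across: try single-pixel changes and keep whichever one hurts the classifier's confidence most. The "classifier" below is a made-up stand-in, not a real image-recognition network.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_score(img: np.ndarray) -> float:
    """Stand-in classifier: 'confidence' that the image is some class,
    here just a fixed random linear model over pixel values."""
    w = np.random.default_rng(42).normal(size=img.shape)
    return float(np.tanh(np.sum(w * img)))

def one_pixel_attack(img, trials=500):
    """Random search (a crude stand-in for the paper's differential
    evolution): try single-pixel changes, keep the one that most
    reduces the classifier's confidence."""
    best_img, best_score = img, toy_score(img)
    h, w = img.shape
    for _ in range(trials):
        cand = img.copy()
        cand[rng.integers(h), rng.integers(w)] = rng.random()
        s = toy_score(cand)
        if s < best_score:
            best_img, best_score = cand, s
    return best_img, best_score

img = np.full((8, 8), 0.5)
adv, score = one_pixel_attack(img)
```

The adversarial image differs from the original in at most one pixel, yet the score moves - the same fragility, in miniature, that turns a taxi into a dog.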

Friday, November 24, 2017

Just Here for the Diamonds


IBM pitches blockchain for cannabis sale
Nov 2017, BBC

Blockchain technology could provide a secure way to track the legal sale of cannabis in Canada, IBM has said.

IBM said: "Blockchain is an ideal mechanism in which BC can transparently capture the history of cannabis through the entire supply chain, ultimately ensuring consumer safety while exerting regulatory control - from seed to sale."

Technology company Everledger is already using the technology to verify the history of diamond transactions.
-BBC


However interesting it is to see the blockchain and the pot trade finally meet, I am here only looking at the fact that diamonds already have their own currency, as it were. The blockchain, which enables cryptocurrencies, is being used, so it seems, to give every commodity its own currency. Then again, it's not even the currency part that is useful here, but the recordkeeping part.

Diamonds demand the airtight recordkeeping that the blockchain provides. Not only do people want to know whether theirs is a blood diamond or not, they want to know whether it was created in a lab, by humans, or in the Earth, by physics. (In the end, it's hard to argue that there is any difference - the diamonds are identical whichever place they come from.) See what happens.
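The recordkeeping part is the easy bit to sketch. A "seed to sale" history is just a list of events where each entry's hash covers the previous entry, so editing old history breaks everything after it. This is not IBM's design - the hard part, a distributed network agreeing on one chain, is exactly what's missing here - and all field names are invented.

```python
import hashlib
import json

def add_record(chain, event):
    """Append an event whose hash covers the previous record's hash,
    so no earlier entry can be edited without breaking the chain."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    return chain + [{"event": event, "prev": prev, "hash": digest}]

def verify(chain):
    """Recompute every hash; any tampering upstream shows up downstream."""
    prev = "0" * 64
    for rec in chain:
        body = json.dumps(rec["event"], sort_keys=True)
        if rec["prev"] != prev:
            return False
        if rec["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True
```

Tamper with any earlier event and its hash changes, which no longer matches the `prev` recorded by the next entry, so `verify` fails - the whole trick behind "from seed to sale."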

image source: link

Got Your Semiotics Right Here


Running on Trumpism but without Trump
Nov 2017, BBC

Nothing to do with the president. Rather, this is a great example of what semiotics is. If I'm holding a sign that says "Trump" but what I really mean is "Gillespie", then I'm playing with semiotics. The sign itself is one part, and the meaning another. The sign can say lots of different things, and it can mean lots of different things too. And "no" doesn't always mean "no". And vice versa. And "would you like to come up and see my etchings" doesn't mean let's actually look at those etchings.

The fact that we can talk about things that aren't even there is what makes us special. They say - although I'm sure it'll be discovered otherwise one day - that if a monkey doesn't see something happen, then it didn't happen. They can't talk about things that aren't there, but we can. And not only that, we can use signs that stand for one thing to mean something else. So cerebral.

Quantum Next


In the future, everything will be "quantum"

New white paper maps the very real risks that quantum attacks will pose for Bitcoin
Nov 2017, phys.org

Quantum Resistant Coin (QRC)

Bitcoin and other cryptocurrencies will be vulnerable to attacks by quantum computers in as little as 10 years. Such attacks could have a disastrous effect on cryptocurrencies as thieves equipped with quantum computers could easily steal funds without detection, thus leading to a quick erosion of trust in the markets.

image source: link

Thursday, November 23, 2017

Fakes and Bots and Fakes and Bots


We finally realize that the "fake" topic on this site will have to take a rest - because we can no longer keep up with reality. It was fun while it lasted, but the number of headlines with "fake" in the title is just too damn high. 

Fake things have taken the world by storm in 2017, and we're all inundated. And that includes facebook.

In an effort to stop fake news, facebook turns your entire timeline into fake news.

Or, in other news, the digitally native, world-changing social platform attempts to make itself more intelligent by programming-by-semantics (i.e., using the word "fake" to find fake stories, as opposed to some other more complicated algorithm, you know, like all the other very much more complicated algorithms they already use).


Facebook's fake news experiment backfires
Nov 2017, BBC

A Facebook test that promoted comments containing the word fake to the top of news feeds has been criticised by users.

The trial, which Facebook says has now concluded, aimed to prioritise "comments that indicate disbelief".

It meant feeds from the BBC, the Economist, the New York Times and the Guardian all began with a comment mentioning the word fake.

The test, which was visible only to some users, left many frustrated.

Post Script
For those upset about the Orwellian experiment that is social media 2.0, forget not - we are not consumers but participants/test subjects

Bots Gonna Bot

"ethnically ambiguous guy in white t shirt" returns me this picture of Keanu Reeves in a black t-shirt

Something about elves and the north pole:

Russian troll describes work in the infamous misinformation factory
Nov 2017, NBC News

“These troll farms can produce such a volume of content with hashtags and topics that it distorts what is normal organic conversation,” Clint Watts, senior fellow at the Foreign Policy Research Institute, told NBC News. “It’s called computational propaganda, the volume [at] which they push, false information or true, makes things appear more believable than they might normally be in an organic conversation.”

Writers were separated by floor, with those on the third level blogging to undermine Ukraine and promote Russia. Writers on the first floor — often former professional journalists like Bespalov — created news articles that referred to blog posts written on the third floor. Workers on the third and fourth floor posted comments on the stories and other sites under fake identities, pretending they were from Ukraine. And the marketing team on the second floor weaved all of this misinformation into social media.

Fake Status


Trump's Renoir painting is not real, Chicago museum says
Oct 2017, BBC

My Psy-ops Is Better Than Your Psy-ops


Facebook, Twitter and Google berated by senators on Russia
Nov 2017, BBC

Russian operatives, likely working from St Petersburg, provoked angry Americans to take to the streets [using social media and fake news], a US Senate committee heard on Wednesday.

Lawyers for three technology companies - Facebook, Twitter and Google - were told they were grossly underestimating the scale of the problem.

"You just don't get it," said California Senator Dianne Feinstein.

"What we’re talking about is a cataclysmic change. What we’re talking about is the beginning of cyber-warfare."

What Was I Just Computing Again


Forget about it: A material that mimics the brain
Oct 2017, phys.org

Lattice breathing, electronic forgetting, and proton doping, oh my.

It's hard to find a material that forgets, but they've found one (the U.S. Department of Energy's Argonne National Laboratory, in collaboration with others). Now they're trying to make a better computer by making it forget - because we forget, and we are the ultimate computing machine, despite what you might think these days.

Moral Outrage in the Digital Age


Haha this graph doesn't even mean anything, numerically, at least. Narratively, however, it gets the point across - viral emergence equals fast death.

Yale assistant professor Molly Crockett looks at how digital media changes the expression of moral outrage and its social consequences, and this WIRED article talks about her on the subject of getting mad online.

It's been a good year for speaking out against sexual harassment and abuse. Maybe it's because the president of the USA doesn't deny that he is an abusive misogynist. Maybe it's because it's finally time for us to talk about how (typically) women have to put up with forms of social behavior more appropriate for animals than people. Regardless, it's definitely time to start talking about experiences of harassment and abuse on the biggest megaphone humankind has ever created, Twitter. This comes in the form of the Me Too campaign, where people tell their stories too. That's where this article stems from.

Growing up in a hyperlinked world makes you suspicious of anything viral. Doesn't matter if it's good or bad, for or against; if it's viral, it's suspicious. Has this been engineered? Is there something inherent in the subject that makes it so susceptible to hyperlinked amplification? Is it just probability and statistics?

Why did Kony 2012 happen? At one point it was the most viral video ever. Ever. Oprah had something to do with that, just saying, in case you're looking for ingredients for a viral pie.

Point being, just because something's viral doesn't mean it's doing a good thing for the movement or campaign that it's a part of. Professor Molly Crockett, and the article below, explain this:

Me Too and the Problem with Viral Outrage
Oct 2017, Jessi Hempel for WIRED

It’s often the case that the people or organizations you shame “publicly” via social media will never see the criticism at all. Your social audience is generally a group of like-minded people—those who have already opted in to your filter bubble. Or as Crockett writes: “Shaming a stranger on a deserted street is far riskier than joining a Twitter mob of thousands.”

One of the chief reasons we decry the actions of others digitally is for our own reputational benefit—so those like-minded people will like us even more.

“People are less likely to spend money on punishing unfairness when they are given the opportunity to express their outrage via written messages instead,” she writes.

Post Script:
Can't help but raise Slavoj Zizek's ideas about consumerism and charity - how we engage in charitable acts as a way to morally license ourselves, to permit ourselves to participate in an economy and way of life that by its nature takes liberties away from others. Weak as it may be, there is a link here between viral outrage and this pursuit of moral license. We're all looking for qualifications for our actions (because every one of us knows that we can be called out any day for what we do, right in front of the whole world all at once). White folks want a license for black folks. Men want a permit that gets them into the women's club. The cops want to high-five the civilians. The surer the guarantee that you'll be let into the club, the greater the motivation to participate in any particular movement.

Slavoj Zizek on Consumerism and Charity: "First as Tragedy, Then as Farce"
https://www.youtube.com/watch?v=hpAMbpQ8J7g 

Notes:
Molly Crockett's TED Talk about morality and decision-making:
https://www.ted.com/talks/molly_crockett_beware_neuro_bunk

Friday, October 20, 2017

Securing the Noosphere


Russian Facebook ads featured anti-immigrant messages, puppies, women with rifles
Oct 2017, Ars Technica

The study of memetics as the spreading of information, and articles about the science of psychological influence of narratives as studied by DARPA, are just some of the posts that have come up here on Network Address. (See the Post Script below.)

Those posts came many years prior to us seeing something like this happen (Russian Facebook election interference) - something that is now very easily understood by a lot of people, as is the threat it poses. Totally not into scaremongering, but definitely into the study of memetics and how it works. As we can see now, it is a very important topic.

It should be noted here that the contagion theory of memetic propagation has been deflated as of late. It's just too simple. There's lots of work being done on what makes a thing viral, and it has less and less to do with infectious disease theory. Something about affect and timing and network distribution.
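The "network distribution" part at least is easy to illustrate. In the toy independent-cascade model below (a cartoon, not any cited study), the same meme with the same per-contact probability spreads completely differently on a ring of acquaintances versus a hub-and-spoke network with one big broadcaster at the center:

```python
import random

def cascade(neighbors, seed_node, p, rng):
    """Independent-cascade model: each newly 'infected' node gets one
    chance to pass the meme to each neighbor with probability p."""
    infected, frontier = {seed_node}, [seed_node]
    while frontier:
        nxt = []
        for node in frontier:
            for nb in neighbors[node]:
                if nb not in infected and rng.random() < p:
                    infected.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return infected

n = 100
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}           # chain of acquaintances
star = {0: list(range(1, n)), **{i: [0] for i in range(1, n)}}     # one broadcaster

rng = random.Random(7)
ring_sizes = [len(cascade(ring, 0, 0.5, rng)) for _ in range(200)]
star_sizes = [len(cascade(star, 0, 0.5, rng)) for _ in range(200)]
```

On the ring the cascade typically dies after a few nodes; from the hub it reaches roughly half the network in one hop. Same meme, same probability, different wiring - which is why "infectiousness" alone never explained virality.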

Regardless, the infectious disease model was never the one used for memetics. By its very name, in fact, memes were thought to behave as genes (and were thus named by Richard Dawkins in his 1976 book, The Selfish Gene). The 90's saw some other folks pick up the idea and write some books, but by the early 2000's it was dead. Leading up to 2010, it became a sort-of underground thing, thanks to the ease of creation and transmission of the macro image series (this is, perhaps, the formal name for this instantiation of a meme). The meme - the new meme, not so much as Dawkins described it, but as a macro image series specifically - seems to have thoroughly saturated mainstream culture, as evidenced by the fact that your mom probably makes them and shares them with her friends. Or in other words, memes are mainstream because Russian digital soldiers are making them and sharing them with scared, synapse-deficient Americans.

Anyway, hopefully the meme is getting its due recognition, and hopefully it can be seen not just as a funny picture, but as the unit of cultural transmission that it was intended to represent.

image source


Post Script
Recombinant Memetics and Narrative Networks
Network Address, 2013
No surprise, DARPA's been doing research on narratives for quite some time now; how else to explain the process of inculturation perpetrated on susceptible patriots by terrorists?

The Inward Turn of the Narrative
Network Address, 2012
A 1970s book by German scholar Erich Kahler, whose view of the modern world is “the steady evolution of consciousness in the direction of the demythification and secularization of wider and wider areas of human life”. A good read on cultural complexification.

The Meme Wars Instruction Manual May Be Written By Robots
Network Address, 2013

Thought Contagion
Network Address, 2012
Summary of a book about how belief spreads through society (the new science of memes)
Aaron Lynch, 1996

And this one is just for fun, and for the art history folk out there:
The Macro Image Series and the Dematerialization of Artifact
Network Address, 2013

Thursday, October 19, 2017

Zero Mind


DeepDream is still the greatest thing to come out of artificial intelligence neural nets.

Today is a twofer. Not only has it occurred to us that AI needs ethics training, but it turns out that in order to do some things, it needs no training from us at all.

Alphabet's DeepMind forms ethics unit for artificial intelligence
Oct 2017, phys.org

First of all, I don't know about you, but I need Human Subjects Research (HSR) training before I can conduct experiments involving humans. Regardless, the AI region of the Google empire now has a way to question and guide the ethical implications of its human-like thinking machines. (Wait a minute, doesn't that name belong to IBM?)

This is probably a good thing, since they already control the stock market (high frequency trading), and your social life (no explanation necessary). I just hope they have a diverse staff there on the ethics panel, because well, does anyone remember the Google gorilla fail?

How about the one where hundreds of bots were released on twitter in a competition to see who could make the most convincing human-analog, but then some of the algorithms were so good that some people started to flirt with them, and the creators really had to ask themselves when or how they should break it to the poor souls? Those were the early days of experimenting on people via the digital world (2011).

In fact, the head of those experiments, Tim Hwang, was very recently named director of the Ethics and Governance of AI Fund, which does research on ethics and AI.

Having covered that, it comes out simultaneously that Google's same DeepMind (the one that did DeepDream and AlphaGo) has now taken unsupervised learning to the final frontier. Instead of teaching the program what to do or even how to learn, they let the thing figure out for itself how to play the game of Go, and it still beats the human. Done. They call it AlphaGo Zero, because it starts with nothing.
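"Starting with nothing" is worth a toy demonstration. AlphaGo Zero's actual machinery (self-play plus a neural network plus Monte Carlo tree search) is far beyond a sketch, but the flavor - give a program only the rules and let it discover strategy by playing positions out against itself - fits in a few lines for a trivially small game, here single-pile Nim (take 1-3 stones; taking the last stone wins):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones: int) -> bool:
    """True if the player to move can force a win: some legal move
    must leave the opponent in a losing position. The program is
    given the rules and nothing else."""
    return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones: int) -> int:
    """Pick a move that leaves the opponent losing, if one exists."""
    for take in (1, 2, 3):
        if take <= stones and not wins(stones - take):
            return take
    return 1  # losing position anyway; any move will do
```

It rediscovers, with no human hints, the classic result that multiples of four are losing positions - strategy emerging from rules alone, which is the "Zero" idea in miniature (by exhaustive play-out here, not by the learned search AlphaGo Zero actually uses).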

Google DeepMind: AI becomes more alien
Oct 2017, BBC


Post Script
Social Bots
Network Address, 2012

Fake Data Police

A fancy tin foil hat is still a tin foil hat.

Study claims vaccines-autism link; scientists find fake data, have rage stroke
Oct 2017, Ars Technica

It's that time of year to get your flu shot. Unless you're these two scientists, in which case you'll probably avoid the flu shot and take some homeopathic neurotoxins instead - because if you don't know you have the flu because your brain is broken, then do you really have the flu??

On a serious note, however, it is really reaffirming to see that science works - in particular, that last step of the scientific method, peer review. I post this because it's a good example of a bad scientific article. It sounds scientific, and it looks scientific, with all those numbers and charts, etc. But you can read right here the problems that many scientists have with it, and let it be a lesson on how to be critical when looking at science papers.

Beyond anything else, however, I would still take advice from Michael Shermer's take on Carl Sagan's baloney detector:

How reliable is the source of the claim?
Does the source make similar claims?
Have the claims been verified by somebody else?
Does this fit with the way the world works?
Has anyone tried to disprove the claim?
Where does the preponderance of evidence point?
Is the claimant playing by the rules of science?
Is the claimant providing positive evidence?
Does the new theory account for as many phenomena as the old theory?
Are personal beliefs driving the claim?

Notes
Subcutaneous injections of aluminum at vaccine adjuvant levels activate innate immune genes in mouse brain that are homologous with biomarkers of autism
Dan Li, Lucija Tomljenovic, Yongling Li, Christopher A. Shaw
Journal of Inorganic Biochemistry
Volume 177, December 2017, Pages 39-54
https://doi.org/10.1016/j.jinorgbio.2017.08.035



Tuesday, October 17, 2017

Fake Fonts


"Font Detectives" Use Their Expertise to Solve High Stakes Cases
Sep 2017, WIRED

Interesting article about the folks who know fonts so well they are called upon to ferret out fake documents - for example, Barack Obama's birth certificate (the fake one, that is). Something like forensic handwriting analysis but for computers (not really; sort of).

Speaking of which, I must relay a good story about handwriting analysis - I once decided to sell a Johnny Unitas signed football. Ebay requires certified signatures before selling, so I sent the football to a group who specialize in sports memorabilia signatures. These folks, the group of them together, know by heart every signature there is.

When I went to pick up my (now certified authentic) football, I got to talking about the nature of the job with a member of this group. I was cleaning out my parents' attic, selling all my old stuff, including my sports memorabilia, including a Don Mattingly signed baseball. I was thinking about the possibility of my 9-year-old self buying a fake baseball, and couldn't help but notice that Don Mattingly had the handwriting of a 4th grader, which looked suspicious to my now 35-year-old self. I did a quick search and saw that his signature had in fact evolved over his career, from that of a 9-year-old girl to that of a sleep-deprived doctor in an emergency room.

So I ask this sports-signature expert how they can really know all these signatures if they change all the time like that. But then he dropped this one on me - Muhammad Ali. As you may know, he suffered from brain damage, specifically Parkinson's, which, as you may know, makes your hand shake uncontrollably. He continued to sign things that continued to be worth money until the end of his life, which required signature experts to continue to be able to identify his signature, and I shit you not, this guy told me that they could identify a real Ali until the very end. I don't know about you, but I find that hard to believe - but then again, that's what experts are for.


Post Script:

The Ampersand - The 27th Letter
Network Address 2012
Marginally related to fonts

Very Distributed Computing


New type of supercomputer could be based on 'magic dust' combination of light and matter
Sep 2017, phys.org

A team of researchers from the UK and Russia have successfully demonstrated that a type of 'magic dust' which combines light and matter can be used to solve complex problems and could eventually surpass the capabilities of even the most powerful supercomputers. -phys.org

image: 2001: A Space Odyssey

Post Script

Weird Computers
Network Address 2013
Computers made of slime, crystals, and frozen light

Laws Meta Physical
Network Address 2013
The Matthew Effect, Zipf's Law, etc.


Monday, October 16, 2017

So Long Stabranja


Maybe it's all this Equifax bonanza stuff going down, but I thought a post about identity and security and automated account attacks would be appropriate.

I was very excited to be able to see my facebook account hacked in a (perhaps) methodical, slow attack that has left me unable to verify my own identity, i.e., access the account. I say perhaps because, perhaps, there is no method-making person behind this; maybe it's just a program following instructions. Regardless, I got to watch it happen, and I'd like to share.

In preface, it should be noted that here at Network Address, we certainly don't present ourselves as digital liberators, that is, computer hackers. However, the world that surrounds the activities of such folk is very interesting to us. Listening to Off the Hook on 99.5 WBAI and attending the HOPE conference at the Hotel Pennsylvania are a great source of the material seen on this site. If you're interested yourself, please look into these; they're very much worth it. (The next HOPE is summer 2018, check it out: https://hope.net/)

Back to the matter. I wonder how common this is. I plan to do some research on this dating site that requires your fb as entry. I have many facebook accounts, many from back in the day before you had to use real names. This one is Stabranja Bones, part of a project from almost 10 years ago, about hick-hop (at the time this was something we made up, but it's apparently a thing now) and bronix (same, although it was called Brocabulary by reddit). So I access this dating site using one of my facebook accounts - unfortunately, a favorite that I'm sad to see taken away from me. Although, I'm glad I got to see it happen firsthand.

I'm on this dating site for a couple days, that's all I need. You know how these sites work, btw - if you leave your account vacant it will be used as a bot. There's no such thing as deactivating or deleting an account. Content has value and will not go to waste, no matter what you think or want. (Remember, when things are free, you're the one giving the value, not taking it.) We used to call this a zombie, I guess - like you killed the account but someone else uses the empty shell, the carcass, to impersonate a real person. This makes the site look like they have more people than they really do, which makes the prospects of finding a date better, which makes the site more attractive, which makes it more likely that you'll pay for a subscription after your free trial. (If you're new to all this, just look into the Ashley Madison scandal - "angels" and "engagers" and etc.) So, I get into the habit of at least deleting all the uploaded pictures on the dating site account, posting new pictures of people that are certainly not me, and then "deactivating" it. I did this.

About a week later, I get a message from a friend of mine, one of the few people I have connected to the hacked fb account, and a person who, unlike myself, is active on facebook and notices these things - he asks me, in real life via text message, if I changed the profile picture on the facebook page. I did not. I assumed that my tooling around with the dating site via the fb site had caused some inadvertent change. In the back of my mind, because I don't trust anything, I thought there was a possibility that everything was already compromised.

About a week or two later I check back into the dating site, just to check up on things, since I was suspicious. I see a chubby Middle Eastern man has taken the place of my profile picture (which until then was a photo of a college friend of mine in drag), and yes, the dating site is still using my profile/account, but with this new chubby Middle Eastern guy as the primary avatar. I log back into fb and delete this guy's pic, and reinstate my old profile pic.

A month goes by. I then get an email stating that my password has been changed; if I didn't do that, I should check into it. I do. They're asking me to confirm my identity. They show me some pictures of "friends" to test whether I know them or not. Hmmm. Some of these people I don't recognize (I only had 3 friends; this was a bogus account we did for fun, after all). I fail the test. I try again. I fail again. I don't know these people. I'm locked out of the account forever.

I go back to my email account (a second account that I use for bogus accounts etc.). Gmail separates "social" emails to another page, so I haven't been seeing the updates from fb etc. I go into this "social" page of emails and see that my fb avatar has been accumulating friends for the past month. I imagine that friend requests are sent out by the hundreds, and someone, be they either real or not, is accepting. Now I have a whole bunch of "friends" who I don't know. And if this is going on for a month, and I'm not doing anything about it, then whoever is doing this (see me giving agency to an algorithm here?) is like "great, nobody's at the wheel, let's take control." My password gets changed.

I recall some time ago, my credit card company called me about potential fraud. Have you been to Florida recently, they asked. No. That's what we thought, you have some fraudulent charges, we're going to take them off and give you a new card number. How did you know, I asked. They bought hard hats from a Home Depot in Florida, and we thought that was strange. ... I thought it was strange that they thought that was strange. Anyway, they know this stuff better than I do, because once someone has stolen your credit card number, the first thing they do is test it; they buy some stuff and see if they get flagged. They see if there's anyone behind the wheel. If not, it's all theirs.

And now Stabranja is all theirs, whoever they are.


Afterword

The next time you hear something like "Facebook has reached x million users," be aware that these are not real people. They're empty shells. Their "likes" are empty as well. Also, the next time you are deciding whether it's worth it to pay for a subscription to that dating site, many of those people are not real. That is to say, they may have been real at one time, but they are no longer; they are also empty shells. 

Post Script

etymology of Stabranja Bones:
Stabroned (brain + stoned) + ganja. Yup. Producer of Brody Lambone, hick-hop sensation.

The Semibots Are Coming
Network Address, 2015

Sunday, October 8, 2017

Physiodata at Large



Drone detects heartbeat and breathing rates
Sep 2017, BBC

The system detects movements in human faces and necks in order to accurately source heart and breathing rates.

***
In other words, facial recognition algorithms have now gone totally apeshit.

I guess they're just looking at your neck, and reading your pulse that way. Do our faces (our heads, really) move in the rhythm of our breathing, so slightly that we might not see it, but a robotic eye-brain can?

Now that we can get live physiological data from large groups of people, simultaneously, and in real time, just by looking at them, it's no time to forget that we can read the date on a dime on the sidewalk from a satellite in orbit.

In extrapolation, all I can think about is Kim Stanley Robinson's Aurora (2015), where the multi-generational starship, equipped with a quantum-computing AI instead of a captain, finally "decides" after a civil war on board that in some cases it's better to let the air out of a biome than to let the people in it do harm to the ship, because, you know, the greater good. The people don't die, at least most of them; instead they just get really, really tired and docile.

Narrative snippets have the ship dictating the "average pulse rate of the ship," meaning the average of every inhabitant, data that an AI-equipped starship of the 22nd century can very capably know.

Who's about to riot? Those people with the quickening pulse, that's who. Face recognition used to yield data on the outside, like your face. Now they can get data from the inside. Maybe "angry faces" are easy to identify, and might be more predictive than pulse. Maybe they're the same thing. But something about a drone I can't even see knowing what's going on inside my body makes me think we're already living in these science fiction novels.

image: Woody Allen on the couch in his 1977 film Annie Hall, BBC

Monday, September 25, 2017

Jellyfish Dreams

Signs of sleep seen in jellyfish
Sep 2017, phys.org

"It's the first example of sleep in animals without a brain."
-coauthor Paul Sternberg, Howard Hughes Medical Institute (HHMI) Investigator at the California Institute of Technology, via phys.org

image source

Bots Made Me Do It


Twitter bots for good: Study reveals how information spreads on social media
Sep 2017, phys.org

Players:
Emilio Ferrara, a USC Information Sciences Institute computer scientist and research assistant professor at the USC Viterbi School of Engineering's Department of Computer Science, and a team from the Technical University of Denmark.

Experiment:
39 bots deployed "positive-themed" hashtags to 25,000 Twitter users over four months.

Conclusion:
Information is much more likely to become viral when people are exposed to the same piece of information multiple times through multiple sources. "This milestone shatters a long-held belief that ideas spread like an infectious disease, or contagion, with each exposure resulting in the same probability of infection," says Ferrara. -phys.org
https://phys.org/news/2017-09-twitter-bots-good-reveals-social.html

Source:
Bjarke Mønsted et al. Evidence of complex contagion of information in social media: An experiment using Twitter bots, PLOS ONE (2017). DOI: 10.1371/journal.pone.0184148
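The distinction Ferrara draws - each exposure carrying the same infection probability versus exposure from multiple sources mattering - can be sketched as two toy adoption rules. A minimal sketch, where p=0.1 and threshold=3 are illustrative numbers, not values from the study:

```python
def p_adopt_simple(k, p=0.1):
    # Simple ("infectious disease") contagion: each of k exposures
    # independently converts you with probability p,
    # so P(adopt) = 1 - (1 - p)^k.
    return 1 - (1 - p) ** k

def p_adopt_complex(k, threshold=3):
    # Complex contagion: adoption requires exposure from at least
    # `threshold` distinct sources; fewer exposures do nothing at all.
    return 1.0 if k >= threshold else 0.0

for k in range(1, 6):
    print(k, round(p_adopt_simple(k), 3), p_adopt_complex(k))
```

Under the simple model, adoption probability creeps up smoothly with each exposure; under the complex model it jumps from zero to certain once enough distinct sources repeat the message - which is roughly what the bot experiment observed.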

image source
image credit

Post Script:

Post from 5 years ago about this topic, check out Tim Hwang at the HOPE#9 conference talking about his ethically and legally dubious twitter-bot experiments on an unsuspecting cluster of 500 users:
Social Bots, Network Address, 2012

In case you were wondering about the difference between robo- and -bot:
Robo vs Bot, Network Address, 2013

Aaaaaand, why are we still not using the word "semibots?"
The Semibots Are Coming, Network Address, 2015


Friday, September 15, 2017

Man of the Year 2017


Look at him. He is the Pee-Wee Herman that you thought Pee-Wee Herman looked like after you found out he was a child molester (is that even true? No, I think he just got caught jerking off in public.)

Shkreli ordered jailed after online bounty on Hillary Clinton's hair
Reuters, Sep 2017

For sheer entertainment value alone this guy should get person of the year. He has added more priceless content to the interwebs than any other person (I guess this has been going on for more than a year, but we have to draw the line somewhere).

He raises drug prices so high that he basically kills dozens of people, and has absolutely no remorse. He goes to trial for securities fraud and tweets about how all the people involved are dumbasses, to the chastisement of his lawyer. He goes on trial and they can't even find a jury for him because he is so infamous for being a dirty asshole. His face is that of a sneaky shitbag (just look at him). His name, for christ's sake, is that of the sleaziest scumbag you ever heard of (Shkreli? Just say it out loud). He buys the secret and almost priceless Wu-Tang album for millions of dollars and then threatens to upload it to a torrent site so everyone can have it for free. I think he tried to pick a fight with Raekwon (am I making that up?). He uses twitter better than Donald Trump. (What does that even mean?)

I am definitely missing some things here, but we pause right after this - he puts a bounty on Hillary Clinton's hair, just one strand. You have got to be kidding me. He's serious, he's hilarious, he's preposterous, his moral compass is actually a piece of spin-art made on the boardwalk at the Jersey shore, his conscience is the evil, zero-fucks-epitome of all of us, manifest. He is the perfect child of corrupt capitalism, born fully formed from the head of the Merrill Lynch Bull.

In my series of coward-heroes, he is just the one I've been waiting for. First Jared Loughner, then Julian Assange, now this guy, Martin Shkreli. Man of the year, 2017.



Thursday, August 31, 2017

Milk Does a Body


Evolution of Adult Lactose Absorption

Farming, cheese, chewing changed human skull shape
Aug 2017, phys.org

The agricultural revolution put the human genome in the spin cycle of a washing machine. Plants, though fibrous, are easier to chew than animals. So during the thousands of years while we were discovering the magic of seed-sowing, and eating the results of our pre-science experimentation, our bodies (our jaws especially) were changing in response.

At this time in our development, the rate of change for a genome was equal to the rate of change of our culture, and our diet, and so the two were able to influence each other. Nowadays, those rates have diverged so that our culture moves faster than the genome - i.e., just this week we saw the first "living drug," which modifies a patient's own cells so that they attack cancer.

Anyway, this is our history:

The largest changes in skull morphology were observed in groups consuming dairy products, suggesting that the effect of agriculture on skull morphology was greatest in populations consuming the softest food (cheese!).
-phys.org

I just have to rant for a moment about arguments for an extreme vegetarian diet. I'm all for being healthy and eating more vegetables and less animals, but to say that it is not natural for us to eat meat or dairy products is preposterous, and this is one of those reasons why. Furthermore, to conjure support for the rational adoption of a 'paleo' diet that mimics the caveman's diet, because 'that is the diet that we evolved to eat', is also preposterous. A lot has happened since caveman days. Sure, the homo sapien is (?) years old (let's just say on the order of 100,000 years). But since that time, our genome has continued to change. Dairying practices are not 100,000 years old, but more like 10,000. And the impact this had on the homo sapien genome was tremendous.

We are the only mammal that maintains lactase into adulthood - that is a genetic modification that we did to ourselves. Nature did not do that to us; we did it. According to this article above, we also changed the shape of our skulls because of eating cheese.

So, in short, we are complex creatures. We are more complex than any new diet can encapsulate. I'm not trying to tell anyone what to eat up in here. But I am saying that to assert there is something wrong with the way we are, and that we should try to be more like the way we were, the "natural way," is to ignore the unfathomably complex and epic journey we have made as a species, and that we should continue to make, as long as we don't all kill each other, and as long as the planet that gave us life doesn't go and kill us all.

Post Script
Here's a paper on lactase persistence, perhaps the greatest story in human culture ever, because it was the inflection point beyond which our culture moves faster than our genes can keep up with.
Evolution of lactase persistence: an example of human niche construction, 2011

Wednesday, August 23, 2017

The Natural Way

Robots eating robots.

'Cyborg' bacteria deliver green fuel source from sunlight
Aug 2017, BBC news

Scientists have created bacteria covered in tiny semiconductors [solar panels] that generate a potential fuel source from sunlight, carbon dioxide and water.

The so-called "cyborg" bugs produce acetic acid [vinegar], a chemical that can then be turned into fuel and plastic.

After combing through old microbiology literature, researchers realised that some bugs have a natural defence to cadmium, mercury or lead that lets them turn the heavy metal into a sulphide which the bacteria express as a tiny, crystal semiconductor on their surfaces.

Dr Kelsey Sakimoto from Harvard University in Massachusetts, US:
"We grow them and we introduce a small amount of cadmium, and naturally they produce cadmium sulphide crystals which then agglomerate on the outsides of their bodies."

They have an efficiency of around 80%, which is four times the level of commercial solar panels, and more than six times the level of chlorophyll.
-BBC

POST SCRIPT
The Energetically Autonomous Tactical Robot (EATR) was a project by Robotic Technology Inc. (RTI) and Cyclone Power Technologies Inc. to develop a robotic vehicle that could forage for plant biomass to fuel itself, theoretically operating indefinitely. It was being developed as a concept as part of the DARPA military projects for the United States military. [And so it eats dead bodies too]

Tuesday, August 22, 2017

Off the Grid


Brooklyn's social housing microgrid rewrites relationships with utility companies
Aug 2017, The Guardian

"Microgrids offer something that rooftop solar alone cannot: the ability to leave the grid entirely."

Until the shit goes down.

I am totally into the sentiment here, and I hope this triggers copycats all across the city, and every city. But there is no such thing as an island in the middle of a city; unless of course you're an actual island. After the power went out during Superstorm Sandy, there was not a single hot (powered) outlet in the city that didn't have something plugged into it. There is no way that a community like this, with its off-the-grid resilience, would be spared during an emergency of like proportions. They would be inundated by others trying to charge their phones, and their phone chargers, and their phone charger chargers.

If you want to be able to maintain power in such an emergency where everyone around you does not have it, you're gonna need a lil military to go with that power grid. A security defense system that keeps people out, and maybe even a way to protect the people who live there as they go out into the rest of the city, because people will be pretty pissed that you get power and they don't. This is all part of the glaring hole in prepper mentality - you may be able to prepare for you and your family, but you can't prepare for others. When the shit really goes down, the most dangerous thing you will face is not food shortages but other people.

Friday, August 18, 2017

Unzipf


Zipf's law: a top-ten favorite thing on Network Address. New theory ---

Unzipping Zipf's Law: Solution to a century-old linguistic problem
Aug 2017, phys.org

Sander Lestrade, a linguist at Radboud University in The Netherlands, proposes a new solution to this notorious problem in PLOS ONE.

...shows that Zipf's law can be explained by the interaction between the structure of sentences (syntax) and the meaning of words (semantics) in a text.

"In the English language, but also in Dutch, there are only three articles, and tens of thousands of nouns," Lestrade explains. "Since you use an article before almost every noun, articles occur way more often than nouns." But that is not enough to explain Zipf's law. "Within the nouns, you also find big differences. The word 'thing', for example, is much more common than 'submarine', and thus can be used more frequently. But in order to actually occur frequently, a word should not be too general either. If you multiply the differences in meaning within word classes, with the need for every word class, you find a magnificent Zipfian distribution. And this distribution only differs a little from the Zipfian ideal, just like natural language does."
-phys.org

WHAT'S ZIPF

The most frequent word in a language, or in a book, or whatever, will occur approximately twice as often as the second most frequent word, three times as often as the third most frequent word, etc.

(straight from wikipedia, I mean it's all numbers anyway, right?)

For example, in The Brown University Standard Corpus of Present-Day American English, the word "the" is the most frequently occurring word, and by itself accounts for nearly 7% of all word occurrences (69,971 out of slightly over 1 million). True to Zipf's Law, the second-place word "of" accounts for slightly over 3.5% of words (36,411 occurrences), followed by "and" (28,852). Only 135 vocabulary items are needed to account for half the Brown Corpus.

The same relationship occurs in many other rankings unrelated to language, such as the population ranks of cities in various countries, corporation sizes, income rankings, and so on.
http://en.wikipedia.org/wiki/Zipf's_law
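The Brown Corpus counts quoted above can be checked directly against Zipf's rank-frequency rule. A quick sketch (the counts are from the excerpt; the prediction is simply top frequency divided by rank):

```python
def zipf_prediction(top_freq, rank):
    # Zipf's law: the word at rank r occurs roughly 1/r as often
    # as the most frequent word.
    return top_freq / rank

# Frequencies quoted above from the Brown Corpus:
brown = {"the": 69971, "of": 36411, "and": 28852}

for rank, (word, actual) in enumerate(brown.items(), start=1):
    predicted = round(zipf_prediction(69971, rank))
    print(f"{word}: actual {actual}, Zipf predicts {predicted}")
```

"of" comes out close to the predicted half of "the" (about 35,000 vs. 36,411), while "and" overshoots the prediction a bit - which is the point Lestrade makes: real language only differs a little from the Zipfian ideal.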

*Zipf's law is referenced in science fiction author Robert J. Sawyer's WWW: Wake, when the main character is searching for intelligent life on the web.
http://en.wikipedia.org/wiki/Wake_(Robert_J._Sawyer_novel)

META

There are some other laws meta-physical, like Benford's Law:

In this distribution, the number 1 occurs as the first digit about 30% of the time, while larger numbers occur in that position progressively less often: 9 appears as the first digit less than 5% of the time. This distribution of first digits is the same as the widths of gridlines on a logarithmic scale.
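Benford's distribution has a closed form, P(first digit = d) = log10(1 + 1/d), and it's easy to verify the 30%/5% figures above, plus check it empirically against a multiplicative sequence like the powers of 2 (my choice of sequence here, just for illustration):

```python
import math
from collections import Counter

def benford(d):
    # Benford's law: probability that a number's first digit is d.
    return math.log10(1 + 1 / d)

print(round(benford(1), 3))  # ~0.301, i.e. about 30% of the time
print(round(benford(9), 3))  # ~0.046, i.e. under 5% of the time

# Empirical check: leading digits of 2^1 through 2^1000.
leads = Counter(str(2 ** n)[0] for n in range(1, 1001))
print(leads["1"] / 1000)  # close to 0.301
```

Multiplicative growth spends more "time" passing through numbers that start with 1 than numbers that start with 9 on a log scale, which is why both the formula and the powers of 2 land near 30%.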


POST SCRIPT
other meta-phys laws etc.

Bursts
Network Address, 2012

Laws Meta-Physical
Network Address, 2013

Physicists eye neural fly data, find formula for Zipf's law
August 2014, phys.org

mathematical models, which demonstrate how Zipf's law naturally arises when a sufficient number of units react to a hidden variable in a system.

"If a system has some hidden variable, and many units, such as 40 or 50 neurons, are adapted and responding to the variable, then Zipf's law will kick in."

"We showed mathematically that the system becomes Zipfian when you're recording the activity of many units, such as neurons, and all of the units are responding to the same variable".

Ilya Nemenman, biophysicist at Emory University and co-author
-phys.org

Tuesday, August 15, 2017

Eyes on the Street


Computer 'anthropologists' study global fashion
Aug 2017, phys.org

What is the world wearing?

These scientists are using a deep learning object recognition program to discover visual patterns in clothing and fashion across millions of images of people worldwide and over a period of many years. They detected attributes like color, sleeve length, presence of glasses or hats, etc. (They end up filtering for only waist-up photos.) They ask questions such as, "How is the frequency of scarf use in the US changing over time?" or "For a given city, such as Los Angeles, what styles are most characteristic of that city?"

The objective of this research is ultimately to "provide a look into cultural, social and economic factors that shape societies and provides insights into civilization."

Dashed lines mark Labor Day. Who said Americans don't like conformity?

via Cornell University: StreetStyle: Exploring world-wide clothing styles from millions of photos. arXiv. arxiv.org/abs/1706.01869

I imagined that stuff like this was already happening all over the place, in all kinds of other fields, and being integrated into global policy decisions and bottom-line business calls alike. But this is not the case; this is still just the beginning. One thing I caught from this, some digital-era common sense - Google Trends results for "scarves" peak right before they do on Instagram, because, presumably, people search for the thing, then they buy it, then they take pictures of themselves wearing it.

Post Script
These are the real people, not the algorithms, that analyze and predict the world of fashion:
Color Conspirators, Network Address

Monday, August 7, 2017

Believability Likability Falliblity


Why humans find faulty robots more likeable
Aug 2017, phys.org

If you've never watched the robots from Boston Dynamics get pushed over while they try to stand, you really should (just search "robot fail videos"). If you've never thought, aw man, I feel really bad for that guy, then you should definitely watch it. Because you know, one day when a real robot-looking robot is taking care of your feeble parents, or you, you're gonna want to like that robot. And as it turns out, watching something struggle, whether it's a robot or a bug, or ^this kid trying to eat cereal - when you watch someone mess up, it makes you like them more.

Says science:

"...participants took a significantly stronger liking to the faulty robot than the robot that interacted flawlessly. This finding confirms the Pratfall Effect, which states that people's attractiveness increases when they make a mistake," says Nicole Mirnig, PhD candidate at the Center for Human-Computer Interaction, University of Salzburg, Austria. -phys.org


Source document:
Nicole Mirnig et al, To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot, Frontiers in Robotics and AI (2017). DOI: 10.3389/frobt.2017.00021


Deanonymity Reanonymity


It is easy to expose users' secret web habits, say researchers
July 2017, BBC News

"Two German researchers say they have exposed the porn-browsing habits of a judge, a cyber-crime investigation and the drug preferences of a politician." -BBC

This isn't news. (So why am I writing about it?)

Despite what you might think, there is really no such thing as anonymous data, that is, when you have enough data.

Four data points is all it takes to identify or de-anonymize anonymous data, and this goes back to 2006. In other words, if I were to take a bunch of people and assign them serial numbers instead of their names and track every website they went to, all I would need is four websites from one particular serial number, and I would be able to identify who that individual is.
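The mechanics of that re-identification are simple enough to sketch: a handful of observed visits acts as a fingerprint you can match against the "anonymous" log. A toy illustration (the users and site names here are made up, not from the German researchers' data):

```python
# Toy "anonymized" browsing log: serial number -> sites visited.
logs = {
    "user-001": {"news.example", "knitting.example", "jazz.example", "chess.example"},
    "user-002": {"news.example", "knitting.example", "sports.example", "chess.example"},
    "user-003": {"news.example", "gardening.example", "jazz.example", "chess.example"},
}

def reidentify(observed_sites):
    # Anyone who can observe a few of your visits (an ad network, a
    # browser extension) matches them against the anonymous log and
    # keeps only the serial numbers consistent with all observations.
    return [uid for uid, sites in logs.items() if observed_sites <= sites]

# One common site matches everyone; two less-common sites narrow it to one.
print(reidentify({"news.example"}))                        # all three users
print(reidentify({"knitting.example", "jazz.example"}))    # → ['user-001']
```

With three users it takes two distinctive sites; with millions of users it takes only a few more, because unusual combinations become unique very quickly - that's the "four data points" result.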

We forget so easily, but over ten years ago, AOL released a bunch of search data, and then took it back down the same day. They realized that you could pretty easily, no, very easily identify, or re-identify, the people behind the search data. Then there was a competition to prove it, done on Netflix users, then Twitter users. Now, ten years later, we have already forgotten. Or perhaps, a tech writer at BBC is just looking for clicks. Or maybe he's just trying to remind us.

There is no privacy on the internet.

On a positive note, your mom was right, you are special and unique and there's nobody else in the world exactly like you (and that's why it's so easy to re-identify your anonymized self).


Notes:
AOL subscribers sue over data leak
Ars Technica, 2006

AOL Proudly Releases Massive Amounts of Private Data
Tech Crunch, 2006

How hard is it to 'de-anonymize' cellphone data?
MIT News, 2013

Unique in the Crowd: The privacy bounds of human mobility.
Yves-Alexandre de Montjoye, César A. Hidalgo, Michel Verleysen & Vincent D. Blondel. Scientific Reports 3, Article number: 1376 (2013). doi:10.1038/srep01376

The official paper:
Paul Ohm. Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization. UCLA Law Review, Vol. 57, p. 1701, 2010
U of Colorado Law Legal Studies Research Paper No. 9-12.
link

image credit: link

Monday, July 31, 2017

Chatbots Start Speaking Their Own Language and It's not Esperanto


Facebook Shuts AI System After Bots Start Speaking Their Own Language, Defy Human Instructions
July 2017, Hindustan Times

Don't even get 'em started. As I tear through Kim Stanley Robinson's Aurora, a hard science fiction novel about a starship trying to colonize the Tau Ceti system, where the ship itself, due to its quantum-computer-powered artificial intelligence system, becomes conscious, I read this headline.

I'm not surprised, nobody is surprised, that these chatbots, these intelligentities, have surpassed our ability to decode what the f they're doing. Deep learning neural networks, for example, are unintelligible to us (correction: I recently read a paper about some folks succeeding in figuring out how to read the hidden programs developed by these learning networks). Computers, ultimately, speak a language of computation, 1's and 0's. So it should be no surprise that, given a complex task of negotiating mock global diplomacy matters, these systems tack toward a better way of working with each other.

Still, it is symbolic. And only to humans do symbolic things matter. Maybe there's a reason for that; maybe this is the beginning. That distance between now and the inevitable transition to the post-human world keeps getting shorter.

Post Script
It Begins: Bots are learning to chat in their own language
July 2017, Cade Metz, WIRED

Wednesday, July 19, 2017

Musical Memetics


Genetic Data Tools Reveal How Pop Music Evolved In The US
The Physics arXiv Blog, 2015. link
Source document:
The Evolution of Popular Music: USA 1960–2010
arxiv.org, 2015. link

I was into this article anyway, because it puts art and science together. Perhaps predictable for some - rap eventually takes over, dance music peaked in the 90's, country is NOT making a comeback (depends how they classify this, is there such a thing as 'new' country?).

But at the end, memetics rears its multi-headed head. Because the authors of this study used genetics-data analysis tools to do all this. Listen to all this memetalk:

"Musicians copy, repeat and modify song styles they like, this leads to a clear pattern of evolution over time. So it should come as no surprise that techniques developed for the analysis of genetic data should work on music data as well. “The selective forces acting upon new songs are at least partly captured by their rise and fall through the ranks of the charts,” they say."

Holy Bread

Salvador Dalí, Crucifixion (Corpus Hypercubus), 1954

Dali wears bread on his head, 1958
Say What?
Vatican outlaws gluten-free bread for Holy Communion
July 2017, BBC

Bread used to celebrate the Eucharist during Roman Catholic Mass must not be gluten-free - although it may be made from genetically modified organisms, the Vatican has ruled.

In a letter to bishops, Cardinal Robert Sarah said the bread can be low-gluten. But he said there must be enough protein in the wheat to make it without additives.

The new rules are needed because the bread is now sold in supermarkets and on the internet, the cardinal said. [I don't understand; the Vatican sells its own brand of bread?]

Roman Catholics believe bread and wine served at the Eucharist are converted into the body and blood of Christ through a process known as transubstantiation.
-BBC

Every once in a while I forget that this is a real thing, like, during this particular ceremony the bread is -really- turned into the body of Christ. Really. Really?

But wait, you're telling me that the same entity that thinks its bread is turning into flesh is also making decisions about the validity of genetically modified organisms. CRISPR vs Christ?

Art and AI

Frederic Bazille’s Studio 9 Rue de la Condamine (1870) and Norman Rockwell’s Shuffleton’s Barber Shop (1950)

When A Machine Learning Algorithm Studied Fine Art Paintings, It Saw Things Art Historians Had Never Noticed
The Physics arXiv Blog via Medium, Aug 2014
Source document: Toward Automated Discovery of Artistic Influence

There's some stirring in the dusty world of art history, with the rise of encultured robots threatening human livelihoods. A promising young algorithm is set upon the world, fed with centuries of art imagery, design principles, and historical documentation. Our little algorithm then grows up and learns how to identify patterns in the art world better than its teacher.

In the two compared images above, this little art-historian algo recognized similar compositional patterns that had never been seen before - a hidden Norman Rockwell, see above.

First of all, as an art history major in college, I look at all the compared/related images discovered by the AI, and I am not so impressed. Maybe the general concept is what fails to impress me. When you follow the art world long enough you get to know something about how influence works, and about the power that one thing can have on an artist's work. And I say that there is no such thing as one thing.

The nature of the artist is to take the world at large, a fuck-tonnery of pre-filtered miasma, and to make sense, or at least to fight with it in a way that leaves a record of the battle, and for the benefit of humankind. To say that one painting influenced another because they have similar stylistic elements or design principles is kind of silly. I do understand that subconscious influence has its way with the creative process. But that refers to life as well as art. The new style checker cab, or Triangle shirtwaists, or bubble tea or middle-hipster Americana folk music or The Beatles or African masks or even syphilis could influence an artists' work.
Charge of the Lancers - Umberto Boccioni - 1915
Take ^Futurism, for example. It is inspired by, among other things, the fragmentation of society, be it from national upheaval circa the World Wars, or from the way the landscape looks while riding a speeding train which propelled people faster past the countryside than they had ever moved before. How does an algorithm find that?

I recall Picasso's mistress Françoise Gilot, in her biography of Pablo Picasso, saying that some of his lobster paintings were a response to her hard-shelled personality, which came to a head prior to their separation. Algorithms can see that? Nah man.

I know someone can come on here and argue with me, successfully, that artists do influence each other in simple visual ways, and at times, the visual connections can supplement a lack of historical data surrounding their work. But still, there is a need for socio-biographical data in all this, and I wonder if our little algo could be even better trained.

Now, all this having been said, I just finished watching this: Davos talk about the future of artificial intelligence, with IBM CEO Ginni Rometty. She says that the goal of IBM's artificial intelligence (Watson, by the way, in case you forgot) is to extend human faculties, not replace them. According to her premonitions, the art historian is not doomed; rather, the field will be enriched and extended by our algorithmic overlords.