Monday, October 16, 2017

So Long Stabranja

Maybe it's all this Equifax bonanza stuff going down, but I thought a post about identity and security and automated account attacks would be appropriate.

I was very excited to be able to see my facebook account hacked in a (perhaps) methodical, slow attack that has left me unable to verify my own identity, i.e., access the account. I say perhaps because, perhaps, there is no method-making person behind this; maybe it's just a program following instructions. Regardless, I got to watch it happen, and I'd like to share.

In preface, it should be noted that here at Network Address, we certainly don't present ourselves as digital liberators, that is, computer hackers. However, the world that surrounds the activities of such folk is very interesting to us. Listening to Off the Hook on 99.5 WBAI and attending the HOPE conference at the Hotel Pennsylvania are a great source of the material seen on this site. If interested yourself, please look into these; they're very much worth it. (The next HOPE is summer 2018, check it out...)

Back to the matter. I wonder how common this is. I like to do research in less likely places, like dating sites, for example. For some of them, you need to use your facebook account to get in. For this reason, among many others, I have many facebook accounts, and many from back in the day before you had to use real names. This one is Stabranja Bones, part of a pretty extensive project carried out almost 10 years ago, about hick-hop (it's not real, we made it up) and bronix (the bro-language that has since become pretty popular, who knew). So I access this dating site using one of my facebook accounts, unfortunately, a favorite that I'm sad to see taken away from me (although I'm glad I got to see it happen firsthand).

I'm on the dating site for a couple days, that's all I need. You know how these sites work, btw - if you leave your account vacant it will be used as a bot. There's no such thing as deactivating or deleting an account. Content has value and will not go to waste, no matter what you think or want. (Remember, when things are free, you're the one giving the value, not taking it.) We used to call this a zombie I guess, like you killed the account but someone else uses the empty shell, the carcass, to impersonate a real person. This makes the site look like they have more people than they really do, which makes the prospects of finding a date better, which makes the site more attractive, which makes it more likely that you'll pay for a subscription after your free trial. (If you're new to all this, just look into the Ashley Madison scandal, "angels" and "engagers" and etc.) So, I get into the habit of at least deleting all the uploaded pictures on the dating site account, posting new pictures of people that are certainly not me, and then "deactivating" it. I did this.

About a week later, I get a message from a friend of mine, one of the few people I have connected to the hacked fb account, and a person who, unlike myself, is active on facebook and notices these things - he asks me, in real life via text message, if I changed the profile picture on the facebook page. I did not. I assume that my tooling around with the dating site via the fb site had caused some inadvertent change. In the back of my mind, because I don't trust anything, I thought there was a possibility that everything was already compromised.

About a week or two later I check back into the dating site, just to check up on things, since I was suspicious. I see a chubby Middle Eastern man has taken the place of my profile picture (which until then was a photo of a college friend of mine in drag), and yes, the dating site is still using my profile/account, but with this new chubby Middle Eastern guy as the primary avatar. I log back into fb and delete this guy's pic, and reinstate my old profile pic.

A month goes by. I then get an email stating that my password has been changed, if I didn't do that, I should check into it. I do. They're asking me to confirm my identity. They show me some pictures of "friends" to test whether I know them or not. Hmmm. Some of these people I don't recognize (I only had 3 friends, this was a bogus account we did for fun, after all.) I fail the test. I try again. I fail again. I don't know these people. I'm locked out of the account forever.

I go back to my email account (a second account that I use for bogus accounts etc.). Gmail separates "social" emails to another page, so I haven't been seeing the updates from fb etc. I go into this "social" page of emails and see that my fb avatar has been accumulating friends for the past month. I imagine that friend requests are sent out by the hundreds, and someone, be they real or not, is accepting. Now I have a whole bunch of "friends" who I don't know. And if this is going on for a month, and I'm not doing anything about it, then whoever is doing this (see me giving agency to an algorithm here?) is like "great, nobody's at the wheel, let's take control." My password gets changed.

I recall some time ago, my credit card company called me about potential fraud. Have you been to Florida recently, they asked. No. That's what we thought, you have some fraudulent charges, we're going to take them off and give you a new card number. How did you know, I asked. They bought hard hats from a Home Depot in Florida, and we thought that was strange. ... I thought it was strange that they thought that was strange. Anyway, they know this stuff better than I do, because once someone has stolen your credit card number, the first thing they do is to test it; they buy some stuff and see if they get flagged. They see if there's anyone behind the wheel. If not, it's all theirs.

And now Stabranja is all theirs, whoever they are.


The next time you hear something like "Facebook has reached x million users," be aware that these are not real people. They're empty shells. Their "likes" are empty as well. Also, the next time you are deciding whether it's worth it to pay for a subscription to that dating site, many of those people are not real. That is to say, they may have been real at one time, but they are no longer; they are also empty shells. 

Post Script

etymology of Stabranja Bones:
Stabroned (brain + stoned) + ganja. Yup. Producer of Brody Lambone, hick-hop sensation.

The Semibots Are Coming
Network Address, 2015

Sunday, October 8, 2017

Physiodata at Large

Drone detects heartbeat and breathing rates
Sep 2017, BBC

The system detects movements in human faces and necks in order to accurately source heart and breathing rates.

In other words, facial recognition algorithms have now gone totally apeshit.

I guess they're just looking at your neck, and reading your pulse that way. Do our faces (our heads really) move in the rhythm of our breathing, so slightly that we might not see it, but a robotic eye-brain can?

Now that we can get live physiological data from large groups of people, simultaneously, and in realtime, just by looking at them, it's no time to forget that we can read the date on a dime on the sidewalk from a satellite in orbit.

In extrapolation, all I can think about is Kim Stanley Robinson's Aurora (2015), where the multi-generational starship, equipped with a quantum computing AI instead of a captain, finally "decides," after a civil war on the ship, that in some cases it's better to let the air out of a biome than to let the people in it do harm to the ship, because, you know, for the greater good. The people don't die, at least most of them; instead they just get really, really tired and docile.

Narrative snippets have the ship dictating the "average pulse rate of the ship," meaning the average of every inhabitant of the ship: data that an AI-equipped starship of the 22nd century can very capably know.

Who's about to riot? Those people with the quickening pulse, that's who. Face recognition used to yield data from the outside, like your face. Now it can get data from the inside. Maybe "angry faces" are easy to identify, and might be more predictive than pulse. Maybe they're the same thing. But something about a drone I can't even see, knowing what's going on inside my body, makes me think we're already living in these science fiction novels.

image: Woody Allen on the couch in his 1977 film Annie Hall, BBC

Monday, September 25, 2017

Jellyfish Dreams

Signs of sleep seen in jellyfish
Sep 2017,

"It's the first example of sleep in animals without a brain."
-coauthor Paul Sternberg, Howard Hughes Medical Institute (HHMI) Investigator at the California Institute of Technology

image source

Bots Made Me Do It

Twitter bots for good: Study reveals how information spreads on social media
Sep 2017,

The study was led by Emilio Ferrara, a USC Information Sciences Institute computer scientist and research assistant professor at the USC Viterbi School of Engineering's Department of Computer Science, together with a team from the Technical University of Denmark.

39 bots deployed "positive-themed" hashtags to 25,000 Twitter users over four months.

Information is much more likely to become viral when people are exposed to the same piece of information multiple times through multiple sources. "This milestone shatters a long-held belief that ideas spread like an infectious disease, or contagion, with each exposure resulting in the same probability of infection," says Ferrara.
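The difference Ferrara describes can be sketched in a toy model (my own illustration, not the paper's actual model): under "simple" contagion every exposure infects independently, while under "complex" contagion it matters how many distinct sources you've seen it from.

```python
# A toy illustration (mine, not the paper's model) of the difference
# between "simple" contagion, where each exposure infects independently
# with probability p, and "complex" contagion, where adoption requires
# exposure from several distinct sources.
def p_simple(p, exposures):
    """Simple contagion: probability of adoption after n independent exposures."""
    return 1 - (1 - p) ** exposures

def adopts_complex(distinct_sources, threshold):
    """Complex contagion: adopt only after `threshold` distinct sources."""
    return distinct_sources >= threshold

print(p_simple(0.1, 1))                            # one exposure
print(p_simple(0.1, 5))                            # repetition raises the odds
print(adopts_complex(1, 3), adopts_complex(4, 3))  # one loud source vs. many
```

In the simple model, five exposures from the same bot would work as well as five from different people; the experiment's point is that they don't, which is what the threshold rule captures.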

Bjarke Mønsted et al. Evidence of complex contagion of information in social media: An experiment using Twitter bots, PLOS ONE (2017). DOI: 10.1371/journal.pone.0184148

image source
image credit

Post Script:

A post from 5 years ago about this topic; check out Tim Hwang at the HOPE #9 conference talking about his ethically and legally dubious twitter-bot experiments on an unsuspecting cluster of 500 users:
Social Bots, Network Address, 2012

In case you were wondering about the difference between robo- and -bot:
Robo vs Bot, Network Address, 2013

Aaaaaand, why are we still not using the word "semibots?"
The Semibots Are Coming, Network Address, 2015

Friday, September 15, 2017

Man of the Year 2017

Look at him. He is the Pee-Wee Herman that you thought Pee-Wee Herman looked like after you found out he was a child molester (is that even true? No, I think he just got caught jerking off in public.)

Shkreli ordered jailed after online bounty on Hillary Clinton's hair
Reuters, Sep 2017

For sheer entertainment value alone this guy should get person of the year. He has added more priceless content to the interwebs than any other person (I guess this has been going on for more than a year, but we have to draw the line somewhere).

He raises drug prices so high that he basically kills dozens of people, and has absolutely no remorse. He goes to trial for securities fraud and tweets about how all the people involved are dumbasses, to the chastisement of his lawyer. He goes on trial and they can't even find a jury for him because he is so infamous for being a dirty asshole. His face is that of a sneaky shitbag (just look at him). His name, for christ's sake, is that of the sleaziest scumbag you ever heard of (Shkreli? Just say it out loud). He buys the secret and almost priceless Wu-Tang album for millions of dollars and then threatens to upload it to a torrent site so everyone can have it for free. I think he tried to pick a fight with Raekwon (am I making that up?). He uses twitter better than Donald Trump. (what does that even mean?)

I am definitely missing some things here, but we pause right after this - he puts a bounty on Hillary Clinton's hair, just one strand. You have got to be kidding me. He's serious, he's hilarious, he's preposterous, his moral compass is actually a piece of spin-art made on the boardwalk at the Jersey shore, his conscience is the evil, zero-fucks-epitome of all of us, manifest. He is the perfect child of corrupt capitalism, born fully formed from the head of the Merrill Lynch Bull.

In my series of coward-heroes, he is just the one I've been waiting for. First Jared Loughner, then Julian Assange, now this guy, Martin Shkreli. Man of the year, 2017.

Thursday, August 31, 2017

Milk Does a Body

Evolution of Adult Lactose Absorption

Farming, cheese, chewing changed human skull shape
Aug 2017,

The agricultural revolution put the human genome in the spin cycle of a washing machine. Plants, though fibrous, are easier to chew than animals. So during the thousands of years while we were discovering the magic of seed-sowing, and eating the results of our pre-science experimentation, our bodies (our jaws especially) were changing in response.

At this time in our development, the rate of change for a genome was equal to the rate of change of our culture, and our diet, and so the two were able to influence each other. Nowadays, those rates have changed so that our culture moves faster than the genome - i.e., just this week we saw the first 'living drug' that changes the patient's genome so that it attacks cancer.

Anyway, this is our history:

The largest changes in skull morphology were observed in groups consuming dairy products, suggesting that the effect of agriculture on skull morphology was greatest in populations consuming the softest food (cheese!).

I just have to rant for a moment about arguments for an extreme vegetarian diet. I'm all for being healthy and eating more vegetables and less animals, but to say that it is not natural for us to eat meat or dairy products is preposterous, and this is one of those reasons why. Furthermore, to conjure support for the rational adoption of a 'paleo' diet that mimics the caveman's diet, because 'that is the diet that we evolved to eat', is also preposterous. A lot has happened since caveman days. Sure, Homo sapiens is (?) years old (let's just say on the order of 100,000 years). But since that time, our genome has continued to change. Dairying practices are not 100,000 years old, but more like 10,000. And the impact this had on the Homo sapiens genome was tremendous.

We are the only mammal that maintains lactase into adulthood - that is a genetic modification that we did to ourselves. Nature did not do that to us; we did it. According to this article above, we also changed the shape of our skulls because of eating cheese.

So, in short, we are complex creatures. We are more complex than any new diet can encapsulate. I'm not trying to tell anyone what to eat up in here. But I am saying that to assert that there is something wrong with the way we are, and that we should try to be more like the way we were, the "natural way", is to ignore the unfathomably complex and epic journey we have made as a species, and that we should continue to make, as long as we don't all kill each other, and as long as the planet that gave us life doesn't go and kill us all.

Post Script
Here's a paper on lactase persistence, perhaps the greatest story in human culture ever, because it was the inflection point beyond which our culture moves faster than our genes can keep up with.
Evolution of lactase persistence: an example of human niche construction, 2011

Wednesday, August 23, 2017

The Natural Way

Robots eating robots.

'Cyborg' bacteria deliver green fuel source from sunlight
Aug 2017, BBC news

Scientists have created bacteria covered in tiny semiconductors [solar panels] that generate a potential fuel source from sunlight, carbon dioxide and water.

The so-called "cyborg" bugs produce acetic acid [vinegar], a chemical that can then be turned into fuel and plastic.

After combing through old microbiology literature, researchers realised that some bugs have a natural defence to cadmium, mercury or lead that lets them turn the heavy metal into a sulphide which the bacteria express as a tiny, crystal semiconductor on their surfaces.

Dr Kelsey Sakimoto from Harvard University in Massachusetts, US:
"We grow them and we introduce a small amount of cadmium, and naturally they produce cadmium sulphide crystals which then agglomerate on the outsides of their bodies."

They have an efficiency of around 80%, which is four times the level of commercial solar panels, and more than six times the level of chlorophyll.

The Energetically Autonomous Tactical Robot (EATR) was a project by Robotic Technology Inc. (RTI) and Cyclone Power Technologies Inc. to develop a robotic vehicle that could forage for plant biomass to fuel itself, theoretically operating indefinitely. It was being developed as a concept as part of the DARPA military projects for the United States military. [And so it eats dead bodies too]

Tuesday, August 22, 2017

Off the Grid

Brooklyn's social housing microgrid rewrites relationships with utility companies
Aug 2017, The Guardian

"Microgrids offer something that rooftop solar alone cannot: the ability to leave the grid entirely."

Until the shit goes down.

I am totally into the sentiment here, and I hope this triggers copycats all across the city, and every city. But there is no such thing as an island in the middle of a city; unless of course you're an actual island. After the power went out during Superstorm Sandy, there was not a single hot (powered) outlet in the city that didn't have something plugged into it. There is no way that a community like this, with its off-the-grid resilience, would be spared during an emergency of like proportions. They would be inundated by others trying to charge their phones, and their phone chargers, and their phone charger chargers.

If you want to be able to maintain power in such an emergency where everyone around you does not have it, you're gonna need a lil military to go with that power grid. A security defense system that keeps people out, and maybe even a way to protect the people who live there as they go out into the rest of the city, because people will be pretty pissed that you get power and they don't. This is all part of the glaring hole in prepper mentality - you may be able to prepare for you and your family, but you can't prepare for others. When the shit really goes down, the most dangerous thing you will face is not food shortages but other people.

Friday, August 18, 2017


Zipf's law, top ten most favorite thing on Network Address. New theory ---

Unzipping Zipf's Law: Solution to a century-old linguistic problem
Aug 2017,

Sander Lestrade, a linguist at Radboud University in The Netherlands, proposes a new solution to this notorious problem in PLOS ONE.

...shows that Zipf's law can be explained by the interaction between the structure of sentences (syntax) and the meaning of words (semantics) in a text.

"In the English language, but also in Dutch, there are only three articles, and tens of thousands of nouns," Lestrade explains. "Since you use an article before almost every noun, articles occur way more often than nouns." But that is not enough to explain Zipf's law. "Within the nouns, you also find big differences. The word 'thing', for example, is much more common than 'submarine', and thus can be used more frequently. But in order to actually occur frequently, a word should not be too general either. If you multiply the differences in meaning within word classes, with the need for every word class, you find a magnificent Zipfian distribution. And this distribution only differs a little from the Zipfian ideal, just like natural language does."


The most frequent word in a language, or in a book, or whatever, will occur approximately twice as often as the second most frequent word, three times as often as the third most frequent word, etc.

(straight from wikipedia, I mean it's all numbers anyway, right?)

For example, in The Brown University Standard Corpus of Present-Day American English, the word "the" is the most frequently occurring word, and by itself accounts for nearly 7% of all word occurrences (69,971 out of slightly over 1 million). True to Zipf's Law, the second-place word "of" accounts for slightly over 3.5% of words (36,411 occurrences), followed by "and" (28,852). Only 135 vocabulary items are needed to account for half the Brown Corpus.
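The rank-frequency relationship is simple enough to check by hand; here's a quick sketch (my own, using the Brown Corpus counts quoted above) of what Zipf's law predicts for ranks 2 and 3 given the count for "the":

```python
# A back-of-the-envelope check of Zipf's law against the Brown Corpus
# counts quoted above: the rank-r word should occur about (top count)/r times.
def zipf_predicted(top_count, rank):
    """Zipf's law: frequency of the rank-r word ~ top_count / rank."""
    return top_count / rank

# observed Brown Corpus counts for ranks 1, 2, 3
observed = {"the": 69971, "of": 36411, "and": 28852}

for rank, (word, count) in enumerate(observed.items(), start=1):
    predicted = zipf_predicted(observed["the"], rank)
    print(f"{word}: observed {count}, Zipf predicts ~{predicted:.0f}")
```

The predictions (about 34,986 for rank 2 and 23,324 for rank 3) land in the right neighborhood of the observed counts, which is the point: real language only differs a little from the Zipfian ideal.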

The same relationship occurs in many other rankings unrelated to language, such as the population ranks of cities in various countries, corporation sizes, income rankings, and so on.

*Zipf's law is referenced in science fiction author Robert J. Sawyer's WWW: Wake, when the main character is searching for intelligent life on the web.


There are some other laws meta-physical, like Benford's Law:

In this distribution, the number 1 occurs as the first digit about 30% of the time, while larger digits occur in that position progressively less often: 9 appears as the first digit less than 5% of the time. This distribution of first digits is the same as the widths of gridlines on a logarithmic scale.
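Benford's distribution has a closed form: the expected share of leading digit d is log10(1 + 1/d). A quick sketch (my own) recovers the percentages above:

```python
import math

# Benford's law: the expected share of leading digit d is log10(1 + 1/d).
# Across digits 1 through 9 these shares sum to exactly 1 (log10 of 10).
def benford_share(d):
    return math.log10(1 + 1 / d)

for d in range(1, 10):
    print(f"{d}: {benford_share(d):.1%}")
```

Digit 1 comes out near 30% and digit 9 under 5%, matching the figures in the paragraph.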

other meta-phys laws etc.

Network Address, 2012

Laws Meta-Physical
Network Address, 2013

Physicists eye neural fly data, find formula for Zipf's law
August 2014,

mathematical models, which demonstrate how Zipf's law naturally arises when a sufficient number of units react to a hidden variable in a system.

"If a system has some hidden variable, and many units, such as 40 or 50 neurons, are adapted and responding to the variable, then Zipf's law will kick in."

"We showed mathematically that the system becomes Zipfian when you're recording the activity of many units, such as neurons, and all of the units are responding to the same variable".

Ilya Nemenman, biophysicist at Emory University and co-author

Tuesday, August 15, 2017

Eyes on the Street

Computer 'anthropologists' study global fashion
Aug 2017,

What is the world wearing?

These scientists are using a deep learning object recognition program to discover visual patterns in clothing and fashion across millions of images of people worldwide and over a period of many years. They detected attributes like color, sleeve length, presence of glasses or hats, etc. (They end up filtering for only waist-up photos.) They ask questions such as, "How is the frequency of scarf use in the US changing over time?" or "For a given city, such as Los Angeles, what styles are most characteristic of that city?"

The objective of this research is ultimately to "provide a look into cultural, social and economic factors that shape societies and provides insights into civilization."

Dashed lines mark Labor Day. Who said Americans don't like conformity?

via Cornell University: StreetStyle: Exploring world-wide clothing styles from millions of photos. arXiv.

I imagined that stuff like this was already happening all over the place, in all kinds of other fields, and being integrated into global policy decisions and bottom-line business calls alike. But this is not the case; this is still just the beginning. One thing I caught from this, some digital-era common sense: Google Trends results for "scarves" peak right before they do on Instagram, because, presumably, people are searching for the thing, then they buy it, then they take pictures of themselves wearing it.

Post Script
These are the real people, not the algorithms, that analyze and predict the world of fashion:
Color Conspirators, Network Address

Monday, August 7, 2017

Believability Likability Fallibility

Why humans find faulty robots more likeable
Aug 2017,

If you've never watched the robots from Boston Dynamics get pushed over while they try to stand, you really should (just search robot fail videos). If you've never thought, aw man, I feel really bad for that guy, then you should definitely watch it. Because you know, one day when a real robot-looking robot is taking care of your feeble parents, or you, you're gonna want to like that robot. And as it turns out, when you watch something struggle or mess up, whether it's a robot or a bug or ^this kid trying to eat cereal, it makes you like them more.

Says science:

"...participants took a significantly stronger liking to the faulty robot than the robot that interacted flawlessly. ... This finding confirms the Pratfall Effect, which states that people's attractiveness increases when they make a mistake," says Nicole Mirnig, PhD candidate at the Center for Human-Computer Interaction, University of Salzburg, Austria.

Source document:
Nicole Mirnig et al, To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot, Frontiers in Robotics and AI (2017). DOI: 10.3389/frobt.2017.00021

Deanonymity Reanonymity

It is easy to expose users' secret web habits, say researchers
July 2017, BBC News

"Two German researchers say they have exposed the porn-browsing habits of a judge, a cyber-crime investigation and the drug preferences of a politician." -BBC

This isn't news. (So why am I writing about it?)

Despite what you might think, there is really no such thing as anonymous data, that is, when you have enough data.

Four data points is all it takes to identify, or de-anonymize, anonymous data; the mobility study cited below showed as much, and the lesson itself goes back to 2006. In other words, if I were to take a bunch of people, assign them serial numbers instead of their names, and track every website they went to, all I would need is four websites from one particular serial number to identify who that individual is.
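A toy simulation (my own illustration, with made-up numbers, not from the cited studies) shows why so few points suffice: even in a big crowd, four specific items from one person's history match almost nobody else.

```python
import random

random.seed(42)

# Toy model: 20,000 "users," each with 30 visited sites out of 1,000.
# How many users are consistent with just four sites from one person's history?
SITES = range(1000)
users = [frozenset(random.sample(SITES, 30)) for _ in range(20_000)]

target = users[0]
probe = set(list(target)[:4])            # four known data points
matches = [u for u in users if probe <= u]
print(len(matches))                      # users consistent with those 4 points
```

With these (made-up) parameters the chance that any other user shares all four sites is well under one in a million per user, so the probe typically pins down the target uniquely; that combinatorial explosion is the whole de-anonymization story.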

We forget so easily, but over ten years ago, AOL released a bunch of search data, and then took it back down the same day. They realized that you could pretty easily, no, very easily identify, or re-identify, the people behind the search data. Then there was a competition to prove it, done on Netflix users, then Twitter users. Now, ten years later, we have already forgotten. Or perhaps a tech writer at BBC is just looking for clicks. Or maybe he's just trying to remind us.

There is no privacy on the internet.

On a positive note, your mom was right, you are special and unique and there's nobody else in the world exactly like you (and that's why it's so easy to re-identify your anonymized self).

AOL subscribers sue over data leak
Ars Technica, 2006

AOL Proudly Releases Massive Amounts of Private Data
Tech Crunch, 2006

How hard is it to 'de-anonymize' cellphone data?
MIT News, 2013

Unique in the Crowd: The privacy bounds of human mobility.
Yves-Alexandre de Montjoye, César A. Hidalgo, Michel Verleysen & Vincent D. Blondel. Scientific Reports 3, Article number: 1376 (2013). doi:10.1038/srep01376

The official paper:
Paul Ohm. Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization. UCLA Law Review, Vol. 57, p. 1701, 2010
U of Colorado Law Legal Studies Research Paper No. 9-12.

image credit: link

Monday, July 31, 2017

Chatbots Start Speaking Their Own Language and It's not Esperanto

Facebook Shuts AI System After Bots Start Speaking Their Own Language, Defy Human Instructions
July 2017, Hindustan Times

Don't even get 'em started. As I tear through Kim Stanley Robinson's Aurora, a hard science fiction novel about a starship trying to colonize the Tau Ceti system, in which the ship itself, due to its quantum-computer-powered artificial intelligence system, becomes conscious, I read this headline.

I'm not surprised, nobody is surprised, that these chatbots, these intelligentities, have surpassed our ability to decode what the f they're doing. Deep learning neural networks, for example, are largely unintelligible to us (though I recently read a paper about some folks successfully figuring out how to read the hidden programs developed by these learning networks). Computers, ultimately, speak a language of computation, 1's and 0's. So it should be no surprise that, given the complex task of negotiating mock global diplomacy matters, these systems tack toward a better way of working with each other.

Still, it is symbolic. And only to humans do symbolic things matter. Maybe there's a reason for that; maybe this is the beginning. That distance between now and the inevitable transition to the post-human world keeps getting shorter.

Post Script
It Begins: Bots are learning to chat in their own language
July 2017, Cade Metz, WIRED

Wednesday, July 19, 2017

Musical Memetics

Genetic Data Tools Reveal How Pop Music Evolved In The US
The Physics arXiv Blog, 2015. link
Source document:
The Evolution of Popular Music: USA 1960–2010, 2015. link

I was into this article anyway, because it puts art and science together. Perhaps predictable for some - rap eventually takes over, dance music peaked in the 90's, country is NOT making a comeback (depends how they classify this, is there such a thing as 'new' country?).

But at the end, memetics rears its multi-headed head. Because the authors of this study used genetics-data analysis tools to do all this. Listen to all this memetalk:

"Musicians copy, repeat and modify song styles they like, this leads to a clear pattern of evolution over time. So it should come as no surprise that techniques developed for the analysis of genetic data should work on music data as well. “The selective forces acting upon new songs are at least partly captured by their rise and fall through the ranks of the charts,” they say."

Holy Bread

Salvador Dalí, Crucifixion (Corpus Hypercubus), 1954

Dali wears bread on his head, 1958
Say What?
Vatican outlaws gluten-free bread for Holy Communion
July 2017, BBC

Bread used to celebrate the Eucharist during Roman Catholic Mass must not be gluten-free - although it may be made from genetically modified organisms, the Vatican has ruled.

In a letter to bishops, Cardinal Robert Sarah said the bread can be low-gluten. But he said there must be enough protein in the wheat to make it without additives.

The new rules are needed because the bread is now sold in supermarkets and on the internet, the cardinal said. [I don't understand; the Vatican sells its own brand of bread?]

Roman Catholics believe bread and wine served at the Eucharist are converted into the body and blood of Christ through a process known as transubstantiation.

Every once in a while I forget that this is a real thing, like, during this particular ceremony the bread is -really- turned into the body of Christ. Really. Really?

But wait, you're telling me that the same entity that thinks its bread is turning into flesh is also making decisions about the validity of genetically modified organisms. CRISPR vs Christ?

Art and AI

Frederic Bazille’s Studio 9 Rue de la Condamine (1870) and Norman Rockwell’s Shuffleton’s Barber Shop (1950)

When A Machine Learning Algorithm Studied Fine Art Paintings, It Saw Things Art Historians Had Never Noticed
The Physics arXiv Blog via Medium, Aug 2014
Source document: Toward Automated Discovery of Artistic Influence

There's some stirring in the dusty world of art history, with the rise of encultured robots threatening human livelihoods. A promising young algorithm is set upon the world, fed with centuries of art imagery, design principles, and historical documentation. Our little algorithm then grows up and learns how to identify patterns in the art world better than its teacher.

In the two compared images above, this little art-historian algo recognized similar compositional patterns that had never been seen before - a hidden Norman Rockwell, see above.

First of all, as an art history major in college, I look at all the compared/related images discovered by the AI, and I am not so impressed. Maybe the general concept is what fails to impress me. When you follow the art world long enough you get to know something about how influence works, and about the power that one thing can have on an artist's work. And I say that there is no such thing as one thing.

The nature of the artist is to take the world at large, a fuck-tonnery of pre-filtered miasma, and to make sense of it, or at least to fight with it in a way that leaves a record of the battle, for the benefit of humankind. To say that one painting influenced another because they have similar stylistic elements or design principles is kind of silly. I do understand that subconscious influence has its way with the creative process. But that refers to life as well as art. The new-style checker cab, or Triangle shirtwaists, or bubble tea, or middle-hipster Americana folk music, or The Beatles, or African masks, or even syphilis could influence an artist's work.
Charge of the Lancers - Umberto Boccioni - 1915
Take ^Futurism, for example. It was inspired by, among other things, the fragmentation of society, be it from national upheaval circa the World Wars, or from the way the landscape looks from a speeding train, which propelled people past the countryside faster than they had ever moved before. How does an algorithm find that?

I heard Picasso's mistress Françoise Gilot, in her biography of Pablo Picasso, say that some of his lobster paintings were a response to her hard-shelled personality, which came to a head prior to their separation. Algorithms can see that? Nah man.

I know someone can come on here and argue with me, successfully, that artists do influence each other in simple visual ways, and at times, the visual connections can supplement a lack of historical data surrounding their work. But still, there is a need for socio-biographical data in all this, and I wonder if our little algo could be even better trained.

Now, all this having been said, I just finished watching this: a Davos talk about the future of artificial intelligence, with IBM CEO Ginni Rometty. She says that the goal of IBM's artificial intelligence (Watson, by the way, in case you forgot) is to extend human faculties, not replace them. According to her premonitions, the art historian is not doomed, but rather will be enriched and extended by our algorithmic overlords.

Tuesday, July 18, 2017

Robots Have Feelings Too

aka In Other News Suicide is Funny Again

A robot kills itself, and everyone thinks it's funny:
Robot 'drowns' in fountain mishap
July 2017, BBC

This headline above was pretty tame. But otherwise, take your pick, I'll go with my local radio news station, WNYC 93.9 FM. Today on the six o'clock headlines they quip - "Turns out his first day on the job was too much for this robot..."

I thought suicide was a big deal. And what's up with the whole bullying thing? And don't even get me started on how they already assumed the thing's gender.

Very often when I think about the way people treat each other, this quote by Carl Sagan comes to mind:
"It’s a little unfair, I think, to criticize a person for not sharing the enlightenment of a later epoch, but it is also profoundly saddening that such prejudices were so extremely pervasive. The question raises nagging uncertainties about which of the conventional truths of our own age will be considered unforgivable bigotry by the next."
Broca’s Brain, Carl Sagan, 1974-1979, p11

And I wonder whether we are living through such a prejudice in 'our own age'. When a situation like this arises, how can you resist extrapolating? One day, far in the future, will we ridicule the newswriters of today for having no sympathy for this poor intelligentity?

But seriously, it seems pretty irresponsible to be poking fun at someone for committing suicide.

Obviously, a robot isn't "someone" and it didn't "commit suicide," but when it is portrayed that way in the headlines, I'll bet that's what it looks like to a young person, for example, or perhaps a person with mental illness. They hear that someone, or something, has killed itself, and they see that everyone thinks it's a joke.

Image source: Robot is Dead, Waldemar-Kazak, 2017

Friday, July 7, 2017

Try Not to Think

Study finds hackers could use brainwaves to steal passwords
Jul 2017,

It's been a while since I tested that EPOC Emotiv headset. It definitely worked, and that was over 5 years ago. Turns out that some people are really using it to play games, although I'm not sure how true this is.

It reads your brainwaves via electrical signal receivers that simply touch your head. Yes, there is electricity running through your brain. And yes, that energy carries a signal that can be decoded and translated. Unfortunately, it's very limited. It can decipher up vs down, or left vs right, or any one thing vs another, but only if you train it that way. You sit there and give it a baseline: you let the headset read your brain while you're thinking of "nothing" (def not as easy as it sounds). Then you train it to read anything other than nothing, and it codes that as a command. If you want two commands, you have to try to give it two very different patterns of thinking, so it can tell the difference; otherwise, it only knows on/off, thinking/not-thinking. Maybe the thing has come a long way and people really can use it to play games that require more than just one button.
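That train-a-baseline-then-threshold workflow can be sketched as a toy classifier. To be clear, this is purely illustrative - the real Emotiv software works nothing this simply, and every name and number below is invented:

```python
import statistics

def train_baseline(windows):
    """Learn the 'thinking of nothing' state: the mean and spread of
    per-window signal power while the user idles."""
    powers = [sum(s * s for s in w) / len(w) for w in windows]
    return statistics.mean(powers), statistics.stdev(powers)

def classify(window, baseline_mean, baseline_std, k=3.0):
    """Anything far enough above baseline power counts as the one
    trained command; everything else is 'not thinking'."""
    power = sum(s * s for s in window) / len(window)
    return "command" if power > baseline_mean + k * baseline_std else "idle"

# Fake calibration data: ten quiet windows (one slightly varied so the
# spread isn't zero), then one test window of each case.
baseline_windows = [[0.1, -0.1, 0.2, -0.2] for _ in range(10)]
baseline_windows[0] = [0.15, -0.1, 0.2, -0.25]
mu, sigma = train_baseline(baseline_windows)

print(classify([0.1, -0.2, 0.1, -0.1], mu, sigma))  # idle
print(classify([2.0, -2.5, 3.0, -2.0], mu, sigma))  # command
```

This is exactly why it only knows on/off: one threshold gives you one button.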

Anyway, surprise surprise, it looks like you may be compromising the security of your own thoughts when you put on this brain reader; who knew?!

from the article:

"The team found that, after a user entered 200 characters, algorithms within the malicious software program could make educated guesses about new characters the user entered by monitoring the EEG data recorded. The algorithm was able to shorten the odds of a hacker's guessing a four-digit numerical PIN from one in 10,000 to one in 20 and increased the chance of guessing a six-letter password from about 500,000 to roughly one in 500."
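To put those quoted numbers in perspective, here's the arithmetic on how much the EEG leak shrinks the guessing problem (the figures come straight from the quote; the code just does the division - and note that the quote's "about 500,000" baseline is far smaller than the 26^6 ≈ 309 million truly random six-letter strings, so presumably a dictionary-limited space):

```python
# 4-digit numeric PIN: 10^4 equally likely codes without the leak.
pin_space = 10 ** 4
pin_odds_with_leak = 20           # "one in 20" per the quote
print(pin_space / pin_odds_with_leak)   # the leak is worth a 500x head start

# 6-letter password: the quote's "about 500,000" baseline.
pw_space = 500_000
pw_odds_with_leak = 500           # "one in 500"
print(pw_space / pw_odds_with_leak)     # a 1000x improvement
```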

image source:
photograph by Brad Miller, neurons in the cerebral cortex of a 6-day old rat, 40x magnification, 1996 Nikon Photomicrography Competition

Post Script
Trepanation is when you drill a hole into someone's head because that's where the problem is. 

Some nice illustrations about trepanation, by Scultetus

Crystals are the Future

No seriously, they are, especially these piezoelectric kinds. I clipped some things from this article just to keep us abreast.

LA crystals turn cars into energy source
July 2017, BBC News

"Piezoelectricity is not new technology - one of the most common applications is electric cigarette lighters which use piezoelectric crystals to create a flame. The electric lighters for barbecues use the same technology.

"Piezoelectric crystals generate an electrical charge when compressed and scientists estimate that if they were positioned on a 10-mile stretch of highway they could generate enough electricity to power the city of Burbank, which has a population of more than 100,000.

"Since 2009, all the displays at the East Japan Railway Company's Tokyo station have been powered by people walking on floor tiles that utilise piezoelectricity.

"And start-up PaveGen has put similar technology beneath the floor of a football pitch in one of Rio de Janeiro's most notorious favelas to offer night-time floodlights powered by footsteps. It means children can play at night rather than hang out in gangs."

image source
Karl Deckart, Salt Crystals 10x magnification, 1996 Nikon Photomicrography Competition

Tuesday, July 4, 2017

Natural vs Artificial

So if we enslave mountains of bacteria to be force-fed glucose while we reap their colorful shit, is that natural or artificial?

Four strains of bacteria work together to produce pigment for food and cosmetics industry
Jul 2017,

Researchers at Rensselaer Polytechnic Institute have shown that four strains of E. coli bacteria working together can convert sugar into the natural red anthocyanin pigment found in strawberries, opening the door to economical natural colors for industrial applications.

"We feed the bacteria glucose and they do the rest."
-Mattheos Koffas, a professor of chemical and biological engineering at Rensselaer, and member of the Center for Biotechnology and Interdisciplinary Studies

image credits: Air bubbles formed from melted ascorbic acid (vitamin C) crystals (50x), Marek Miś, 2016 Nikon Small World Micrography Competition

To Shape the Future

Crystal balls over here

Predicting the future with the wisdom of crowds
Jun 2017,

Don Moore and a team of researchers found a new way to improve forecasting by training ordinary people - 'superforecasters' - to make more confident and accurate predictions over time.

The team, working on The Good Judgment Project, had the perfect opportunity to test its future-predicting methods during a four-year government-funded geopolitical forecasting tournament sponsored by the United States Intelligence Advanced Research Projects Activity. The tournament, which began in 2011, aimed to improve geopolitical forecasting and intelligence analysis by tapping the wisdom of the crowd. Moore's team proved so successful in the first years of the competition that it bumped the other four teams from the tournament, becoming the only project left with funding.

The study differs from previous research on overconfidence in forecasting because it examines accuracy in forecasting over time, using a huge and unique data set gathered during the tournament. That data included 494,552 forecasts by 2,860 forecasters who predicted the outcomes of hundreds of events.
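The 'wisdom of the crowd' mechanism these tournaments tap is easy to demonstrate: individual estimates scatter widely, but their errors partially cancel when averaged. A toy example with entirely made-up guesses at a known true value:

```python
# Seven hypothetical guesses at a true value of 100 (numbers invented).
true_value = 100
guesses = [50, 65, 80, 90, 110, 120, 135]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - true_value)
avg_individual_error = sum(abs(g - true_value) for g in guesses) / len(guesses)

print(round(crowd_error, 2))           # 7.14 - the crowd as a whole
print(round(avg_individual_error, 2))  # 25.71 - the typical individual
```

The crowd beats its own average member by a wide margin, even though no single guesser was anywhere near right.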

The Wisdom of Crowds
James Surowiecki, 2004

Swarm A.I. Correctly Predicts the Kentucky Derby, Accurately Picking all Four Horses of the Superfecta at 540 to 1 Odds
Yahoo Finance, April 2016

The Chinese Flesh Engine
BBC 2014


This kind of thing always reminds me of a passage from Levy-Bruhl's Primitive Mentality - the indigenous people he writes about are perplexed at the ability of the white scientists to "predict" a lunar eclipse.

They live in a timeless world. Hence, for example, an omen doesn’t just reveal what will happen, it is evidence that it is already happening.

They ask of the whites – how could you predict it (a lunar eclipse) if it was not you who caused it?

I have also found that folks who have less of a functioning prefrontal cortex, if you know what I'm saying, will tend to blame the person who predicts a situation as if they caused it. Take, for example, an angsty adolescent - if you tell them not to do something because of some probable result (don't smoke pot in the high school bathroom because you'll probably get caught), they will blame you as if your prediction actually caused the outcome. Some of us are no different from Levy-Bruhl's "primitives".

Primitive Mentality, Lucien Levy-Bruhl, 1923, trans 1966

Primitive Mentality
Network Address 2012

Monday, July 3, 2017

So Easy a Caveman Could Do It

I would like to take a moment here to mourn the passing of an opportunity, and one that may not come again for too long. We are talking here about the Geico Caveman, debuted in 2004ish, gone within a few years, and never to be seen again, except by those who have that strong sense of maintaining cultural posterity.

I really believe that, to this day, the Geico Caveman stunt is the most culturally sophisticated thing ever produced by American consumerist society (slight hyperbole). And so, for those who didn't catch it the first time around: The Geico Caveman had nothing to do with Geico car insurance; he was created simply as a funny character, and a preposterous idea, to help you remember Geico the next time you think you should be saving money on your car insurance.

Really there were a bunch of cavemen, still living in modern society, and they were pissed because of Geico's temporary tagline, "so easy even a caveman could do it [use the Geico website to sign up for car insurance]". The cavemen were offended because the ad implies that they're stupid, but they're humans just like everyone else, so why should they be singled out?? They were pissed and demanded an apology in successive commercials. They even got their own tv show, a sitcom, I think it lasted like one episode.

It was created as a stab at the current state of political correctness. I should remind folks this was happening at the same time Queer Eye for the Straight Guy came out (no pun, well maybe a little pun), a tv show about gay guys helping straight guys to comb their hair and brush their teeth. This is also the same time the word 'metrosexual' entered the mainstream lexicon, referring to a straight man who takes on the visual aesthetics of a gay man, but without the homophobia. This was also the same time NJ Governor Jim McGreevey came out as gay, which led to the phrase "no homo", as in "yo I really like that new tattoo on your inner thigh bro, no homo".

So yes, there we were, in a heightened state of social awareness, at least in regards to sexuality. The tide is turning, it might actually be ok to be gay, or god forbid, black (US first black president isn't elected for another 4 years).

But then what happened. It's now 2017 and it's not even ok to be a woman anymore for f sake.

All I'm saying is, that caveman may have been our way out. You can make fun of cavemen all you want! They're stupid, they're hairy, they smell, they went extinct for christ's sake; they are inferior! They are the only thing we can make fun of that is enough 'like us' to make it hurt, but are enough -not- like us to make it feel good when we make fun of them. I really think the caveman should be adopted as the pan-cultural hate symbol. People can't not hate; they have to hate something. White people gonna hate black people, straight people gay people, rich people poor people, society is gonna subconsciously hate women, but openly hate men, we can't not hate. So why can't we all just hate cavemen? It shows how stupid we are to hate on people in the first place, and it doesn't hurt anybody, because CAVEMEN ARE ALREADY EXTINCT.

I thought the caveman was our way out, an escape hatch from our incessant need to find difference between 'us and them'. We dropped the ball. Maybe we'll get a second chance. BUT we better hurry up, before we Jurassic-Park an actual caveman, and then we'll never be able to make fun of them again!

Post Script:
Why are there no cavewomen?

Post Post Script:
Check out this movie about one kind of caveman showing another kind of caveman how to make fire. Nobody speaks English, yet they all have names with anglicized spellings (according to the credits at the end). And fyi, Rae Dawn Chong was one hot cavewoman.

Sunday, June 18, 2017

Psychedelic Time Lapse

Any story about LSD experiments from the 50's is urban legend by now and should be taken with a grain of salt, or a microgram, as it were. There's the video of the dosed soldiers climbing trees and wrapping themselves in the field-telephone cord, that's a good one, and it's on video, so I guess that's less legend and more real.

Here is a series of pictures that was supposedly done by an artist after taking a dose of LSD. Maybe they were done by dozens of artists over the course of many years, under different scientists, and only the best were chosen to narrate this story. Maybe the whole thing is made up! Who cares!?


1.  --  0 hr 20 min
Patient chooses to start drawing with charcoal. The subject of the experiment reports - 'Condition normal... no effect from the drug yet'.

2.  --  1 hr 30 min
The patient seems euphoric. 'I can see you clearly, so clearly. This... you... it's all... I'm having a little trouble controlling this pencil. It seems to want to keep going.'

3.  --  2 hr 30 min
Patient appears very focused on the business of drawing. 'Outlines seem normal, but very vivid - everything is changing colour. My hand must follow the bold sweep of the lines. I feel as if my consciousness is situated in the part of my body that's now active - my hand, my elbow... my tongue'.

4.  --  2 hr 32 min
Patient seems gripped by his pad of paper. 'I'm trying another drawing. The outlines of the model are normal, but now those of my drawing are not. The outline of my hand is going weird too. It's not a very good drawing is it? I give up - I'll try again...'

5.  --  2 hr 35 min
Patient follows quickly with another drawing. 'I'll do a drawing in one flourish... without stopping... one line, no break!' Upon completing the drawing the patient starts laughing, then becomes startled by something on the floor.

6.  --  2 hr 45 min
Patient tries to climb into activity box, and is generally agitated - responds slowly to the suggestion he might like to draw some more. He has become largely non-verbal. 'I am... everything is... changed... they're calling... your face... interwoven... who is...' Patient mumbles inaudibly to a tune (sounds like 'Thanks for the memory'). He changes medium to tempera.

7.  --  4 hr 25 min
Patient retreated to the bunk, spending approximately 2 hours lying, waving his hands in the air. His return to the activity box is sudden and deliberate, changing media to pen and watercolour. 'This will be the best drawing, like the first one, only better. If I'm not careful I'll lose control of my movements, but I won't, because I know. I know' - (this saying is then repeated many times). Patient makes the last half-a-dozen strokes of the drawing while running back and forth across the room.

8.  --  5 hr 45 min
Patient continues to move about the room, intersecting the space in complex variations. It's an hour and a half before he settles down to draw again - he appears over the effects of the drug. 'I can feel my knees again, I think it's starting to wear off. This is a pretty good drawing - this pencil is mighty hard to hold' - (he is holding a crayon).

9.  --  8 hr 0 min
Patient sits on bunk bed. He reports the intoxication has worn off except for the occasional distorting of our faces. We ask for a final drawing which he performs with little enthusiasm. 'I have nothing to say about this last drawing, it is bad and uninteresting, I want to go home now.'


Check out this series, again debatable authenticity, by an artist who developed mental illness, but continued to do cat drawings all his life...

Louis Wain and the Evolution of Schizophrenia
2013, Network Address

Comedy of the Commons

Balinese rice paddies

Fractal patterns

Fractal planting patterns yield optimal harvests, without central control
Jun 2017,

Balinese rice farmers make some crazy patterns with their rice fields, but they don't do this on purpose. The rice fields plant themselves in this pattern, using the rice farmers. Just kidding, or not.

These farmers are all part of the same group, using the same resources, those being their rice paddies. They plant their rice based on a whole bunch of variables, including the planting patterns of the other farmers who share the paddies, and the amount of water flowing down the river. All of these variables are interdependent, such that the farmers in one area may change the amount of water in the river depending on when they plant, which in turn changes when other farmers will plant.

All of this decision-making, however, does not go through a centralized process, and although the farmers are making their own decisions, the final pattern of planted rice fields was not decided by them alone, but by the interaction and feedback of the system as a whole.

from the article:

"What is exciting scientifically is that this is in contrast to the tragedy of the commons, where the global optimum is not reached because everyone is maximizing his individual profit. This is what we are experiencing typically when egoistic people are using a limited resource on the planet, everyone optimizes the individual payoff and never reach an optimum for all," he says.

The scientists find that under these assumptions, the planting patterns become fractal, which is indeed the case as they confirm with satellite imagery. "Fractal patterns are abundant in natural systems but are relatively rare in man-made systems," explains Thurner. These fractal patterns make the system more resilient than it would otherwise be. "The system becomes remarkably stable, again without any planning—stability is the outcome of a remarkably simple but efficient self-organized process. And it happens extremely fast. In reality, it does not even take ten years for the system to reach this state," Thurner says.
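The mechanism described above - local imitation under two competing pressures (synchronize with your neighbors to dilute pests, but avoid planting when the whole valley does, because water runs short) - can be caricatured in a few lines. This is emphatically NOT the paper's actual model, just a hand-rolled sketch of decentralized decision-making with invented parameters:

```python
import random

random.seed(42)
N, MONTHS, STEPS = 60, 4, 4000  # farmers on a ring, planting months, updates

# Every farmer starts with a random planting month.
choices = [random.randrange(MONTHS) for _ in range(N)]

def payoff(i):
    """Reward matching your two neighbors (pest control), penalize
    matching too much of the whole valley (water shortage)."""
    same_neighbors = sum(choices[(i + d) % N] == choices[i] for d in (-1, 1))
    same_globally = sum(c == choices[i] for c in choices) / N
    return same_neighbors - 2.0 * same_globally

for _ in range(STEPS):
    i = random.randrange(N)                 # a random farmer...
    j = (i + random.choice((-1, 1))) % N    # ...looks at a random neighbor
    if payoff(j) > payoff(i):               # and imitates them if better off
        choices[i] = choices[j]

print(choices)  # the final planting pattern, reached with no central planner
```

Nobody in this loop computes a global plan; whatever structure shows up in `choices` is the byproduct of local imitation and feedback, which is the article's point.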

The Tragedy of the Commons

Photonic Cognition

Researchers investigate decision-making by physical phenomena
Jun 2017,

The Illusion of Control is a common theme here on Network Address. So every once in a while we see something about how things inhuman, and things not even alive, are making decisions just like us. This forces us to consider whether "we" make decisions at all.

From the article:

"Decision-making is typically thought of as something done by intelligent living things and, in modern times, computers. But over the past several years, researchers have demonstrated that physical objects such as a metal bar, liquids, and lasers can also "make decisions" by responding to feedback from their environments. And they have shown that, in some cases, physical objects can potentially make decisions faster and more accurately than what both humans and computers are capable of.

"In a new study, a team of researchers from Japan has demonstrated that the ultrafast, chaotic oscillatory dynamics in lasers makes these devices capable of decision making and reinforcement learning, which is one of the major components of machine learning. To the best of the researchers' knowledge, this is the first demonstration of ultrafast photonic decision making or reinforcement learning, and it opens the doors to future research on "photonic intelligence."

"In our demonstration, we utilize the computational power inherent in physical phenomena," coauthor Makoto Naruse at the National Institute of Information and Communications Technology in Tokyo told

[and by the way, when we take this further, like to the inevitable AI overlord conclusion...]

"Such systems provide huge potential for our future intelligence-oriented society. We call such systems 'natural Intelligence' in contrast to artificial intelligence."

"In experiments, the researchers demonstrated that the optimal rate at which laser chaos can make decisions is 1 decision per 50 picoseconds (or about 20 decisions per nanosecond)—a speed that is unachievable by other mechanisms. With this fast speed, decision making based on laser chaos has potential applications in areas such as high-frequency trading, data center infrastructure management, and other high-end uses.
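The 'decision making' task in these laser experiments is essentially the multi-armed bandit problem from reinforcement learning: repeatedly pick between options with unknown payoffs, balancing exploration against exploitation. The lasers do it with chaotic intensity fluctuations; in ordinary software, the textbook epsilon-greedy version looks like this (an analogy with invented payoff rates, not the researchers' method):

```python
import random

random.seed(1)
true_probs = [0.9, 0.3]   # hidden payoff rates of two "slot machines"
q = [0.0, 0.0]            # our running reward estimates
counts = [0, 0]
EPSILON = 0.1             # how often we explore at random

for _ in range(1000):
    if random.random() < EPSILON:
        arm = random.randrange(2)          # explore: try anything
    else:
        arm = 0 if q[0] >= q[1] else 1     # exploit: best estimate so far
    reward = 1 if random.random() < true_probs[arm] else 0
    counts[arm] += 1
    q[arm] += (reward - q[arm]) / counts[arm]  # incremental running mean

print(counts)  # the better arm ends up dominating the pull counts
```

The laser version plays the same game, only its "coin flips" are chaotic light fluctuations arriving every 50 picoseconds instead of calls to a pseudorandom number generator.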

Saturday, June 17, 2017

Arts Squabble

Occupy Wall Street, September 2011. Barricades remained for three years following the protests.

Urinating dog joins Wall Street statue row
May 2017, BBC

The Merrill Lynch 'Charging Bull' is the focus of some public-art controversy, after someone put a sculpture of a 'fearless girl' staring down the bull, and then someone else put a 'pissing pug' pissing on the girl. A timeline should help:

1987
Stock market crash.

1989
Artist Arturo Di Modica puts the 7,000-pound bronze bull right there in front of the New York Stock Exchange without telling anyone or getting any permission. (That's called guerrilla art.) Later that day it gets removed by police and placed in an impound lot, but it is later reinstalled a couple blocks away. The bull was meant to symbolize financial optimism and prosperity.

March 2017
Kristen Visbal puts Fearless Girl right in front of the Bull. The Girl was commissioned by State Street Global Advisors to promote a fund on the market that considers itself gender-diverse. The Bull was paid for by Di Modica, the artist himself; the Girl was paid for by a firm. Di Modica calls it a dis to his bull - an act of commerce, not of altruism. She, the Girl, was commissioned to highlight gender inequality, but she was paid for by a firm that trades on the stock exchange.

May 2017
Alex Gardega makes a little dog sculpture, and has it pissing on the girl. He calls the Girl "corporate nonsense."

It's really hard to argue that the statue of the girl has to do with gender inequality when you look at who commissioned it.

Unfortunately, because gender equality IS an issue, lots of people get lots of pissed when the girl gets dissed. Un-further-fortunately, because income inequality and corporate takeover are an even bigger problem than gender inequality, people are more upset that the symbolic girl is getting pissed on by a dog than they are that we as a society are getting pissed on by the entities that constitute the stock market, for example.

It does seem like a cheap trick, the Girl, that is.

But let's not forget that the Bull was already the voodoo doll of the Occupy Wall Street movement back in 2011, such that it was protected by barricades for three years following the protest. It represents the thing that has shaken the moral compass of Western society - corporate greed and power.

Personally, I would really, really, really like to see a bronze barricade placed around that bull.

image source: link

Organs Galore

New lung 'organoids' in a dish mimic features of full-size lung
May 2017,

"Organoids are 3-D structures containing multiple cell types that look and function like a full-sized organ. By reproducing an organ in a dish, researchers hope to develop better models of human diseases, and find new ways of testing drugs and regenerating damaged tissue."

And for making distributed intelligence composite humans that can float in a tank on a spaceship that gets blinked to Proxima Centauri.

image source: link

Friday, June 16, 2017

The Metabolism of the Anthroposphere

We're looking at a study here, where social network activity is measured and, in turn, used to predict the level of physical damage to a location (due, for example, to a natural weather disaster).

The main conclusion of the study was obtained when the data relating to social network activity was examined alongside data relating to both the levels of aid granted by the Federal Emergency Management Agency (FEMA) and insurance claims: there is a correlation between mean per-capita social network activity and per-capita economic damage caused by these disasters in the areas where such activity occurs. In other words, both real and perceived threats, along with the economic effects of physical disasters, are directly observable through the strength and composition of the flow of messages from Twitter.
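The statistical workhorse behind a finding like that is just a correlation coefficient between two per-capita series. A sketch with five entirely invented disaster-struck counties (the numbers illustrate the method, not the study's data):

```python
from math import sqrt

# Invented per-capita figures for five hypothetical counties.
tweets_per_capita = [0.5, 1.0, 1.5, 2.0, 2.5]
damage_per_capita = [120, 260, 310, 450, 480]   # dollars, made up

def pearson(xs, ys):
    """Pearson correlation: covariance scaled by both spreads, so the
    result always lands between -1 and +1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(tweets_per_capita, damage_per_capita)
print(round(r, 3))  # strongly positive for this made-up data
```

A strongly positive r is what lets tweet volume stand in as a proxy for damage before the FEMA and insurance numbers ever come in.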

March 2016,

Image source: link


Other Network Address-ing on sociothermodynamics:

For the etymological origins of the anthroposphere:
see Pierre Teilhard de Chardin’s noosphere

And just in case you thought I made up this term, there is a book by the same title, very informative and an eye-opening read for anyone interested in what humans do:
The Metabolism of the Anthroposphere, 2nd ed. Peter Baccini and Paul H. Brunner. MIT, 2012.

Overview from the publisher’s website:
Over the last several thousand years of human life on Earth, agricultural settlements became urban cores, and these regional settlements became tightly connected through infrastructures transporting people, materials, and information. This global network of urban systems, including ecosystems, is the anthroposphere; the physical flows and stocks of matter and energy within it form its metabolism. This book offers an overview of the metabolism of the anthroposphere, with an emphasis on the design of metabolic systems. It takes a cultural historical perspective, supported with methodology from the natural sciences and engineering. The book will be of interest to scholars and practitioners in the fields of regional development, environmental protection, and material management. It will also be a resource for undergraduate and graduate students in industrial ecology, environmental engineering, and resource management.

The authors describe the characteristics of material stocks and flows of human settlements in space and time; introduce the method of material flow analysis (MFA) for metabolic studies; analyze regional metabolism and the material systems generated by basic activities; and offer four case studies of optimal metabolic system design: phosphorus management, urban mining, waste management, and mobility.

This second edition of an extremely influential book has been substantially revised and greatly expanded. Its new emphasis on design and resource utilization reflects recent debates and scholarship on sustainable development and climate change.


And for the speculative fiction novel about the anthroposphere, see here:
Mass Transference Device, 2012.

In this story, humanity is headed for an end point, like the Big Bang, but in reverse, and for humans only. Humanity can avoid this moment of absolute concentration (or do they only speed its advance) by replacing “themselves” in the world with their self-replicates, and then by themselves going backwards through the trajectory of progress. From that point on, humans “progress backwards”, becoming less and less reliant on technology and approaching the original collective consciousness we were all part of before we became individuals (which is not much different than the anthroposphere concept of our future, as presented in the story, only it would be happening in reverse).

This transition is especially difficult because humans, by approximately the year 2070, will have bred out of themselves the ability to live without their anthropospheric bubble. They need, somehow, to breed back into their race the ability to live like they used to (in the days of the early 21st century).

It is the written thought of his ancestors that Hassam Flessihfo uses to help him make this backwards transition. Together with his partner he passes on his reformed “genes” to his son Samm Ashcroftt, who in turn becomes the first human born with the ability to survive in complete independence of the anthroposphere.