Sunday, December 8, 2019

Artificial Impressionism


Fake news via OpenAI - Eloquently incoherent?
Nov 2019, phys.org

Robots slowly taking over. Give them a sentence and they can now keep it going for a few more sentences, but after that it gets stupid.

So you can give it a fake headline, and it will generate the first line of the story, but after that things will start to fall apart.

Good thing the targets for engineered memetic propagation are not trying to read past the first sentence!

Post Script
Researchers develop a method to identify computer-generated text
July 2019, phys.org

[Three images: color-coded text samples from a generated-text detector, described below.]

In the above 3 images, the first is a chunk of text written by a robot (most of the words are green, with a few yellows sprinkled in); the second is a real New York Times article (only half green, the rest yellow, with some red and a sprinkle of purple); and the third is a clip from "the most unpredictable human text ever written," James Joyce's Finnegans Wake (green, yellow, red, and purple all evenly distributed across the page).

Green words are the ones a model would most predictably pick as the next word. Yellow words are less likely to follow the word that comes before them. Red and purple are for when the next word is something the model absolutely did not expect.

Because today's text-writing algorithms use a statistical model trained on a huge compendium of written language (so they know which words typically occur together), the output of such algos will tend to look like the topmost image: nearly all green. The algos can't think for themselves, they can't 'come up with' new stuff, and they can't be unpredictable. The whole point of writing an algorithm to do this is to prescribe in advance what it's going to do, i.e., it's predictable.
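The ranking idea behind those colors can be sketched in a few lines. This is a toy sketch only, assuming a tiny bigram word-count table as the "model" (the real detectors use a big neural language model, not bigrams, and these color thresholds are invented for illustration): for each word, ask how highly the model ranked it as the follower of the previous word, then bucket that rank into green/yellow/red/purple.

```python
from collections import Counter, defaultdict

# "Train" a toy bigram model on a tiny corpus: count which word follows which.
corpus = "the cat sat on the mat and the cat ran to the door".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def bucket(prev, word):
    """Color a word by how highly the model ranked it as the next word after `prev`."""
    ranked = [w for w, _ in bigrams[prev].most_common()]
    if word not in ranked:
        return "purple"              # the model never saw this word follow `prev`
    rank = ranked.index(word)
    if rank < 1:
        return "green"               # the model's top prediction
    if rank < 3:
        return "yellow"              # plausible, but not the favorite
    return "red"                     # low on the list: surprising

print(bucket("the", "cat"))   # "cat" follows "the" most often here -> green
```

Text that always takes the model's top pick comes out all green (like the robot-written sample), while human prose keeps landing on yellow, red, and purple words the model didn't rank highly.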

Anyway, soon we won't be writing our robots to write like that. They'll use less predictable programs to generate their text, with unpredictability and random association thrown in there on purpose.
