Friday, July 9, 2021

Writing Robots 3

OpenAI (competitor to DeepMind) created a text generator that basically wipes out the need for human writers. Or, to be a bit less hyperbolic, it makes it all but impossible to discriminate between human writing and robot writing.

It's called GPT, for "Generative Pre-trained Transformer". An older version, GPT-2, was so good that its full model was initially withheld for fear it would be used for malicious purposes (e.g., fake news). Now we're on GPT-3, which will be released and is roughly 100 times larger. But no need for alarm: OpenAI intends to "prevent misuse by limiting access to approved customers and use cases" (though it's now been exclusively licensed to Microsoft).

For comparison, here's a much earlier artificial writing program -- The Policeman's Beard Is Half-Constructed (1984), a book of prose and poetry credited to the program Racter.

Lastly, I wonder what this means for the blogosphere, of which Network Address has been a part for over 10 years now. Will it be completely diluted by an entire universe of fake blogs? And then I realize: who cares, nobody reads this blog except robots anyway (i.e., spiderbots).

AI tool summarizes lengthy papers in a sentence
Jan 2021, phys.org
Semantic Scholar, an AI-powered search engine for scientific research, is notable for achieving the greatest compression rate of any summarizing tool: its new summarization feature surveys massive numbers of scientific research papers and reduces each one to a one-sentence summary.
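
(Aside: those one-sentence summaries also seem to be exposed programmatically. A minimal sketch in Python, assuming Semantic Scholar's public Graph API and its "tldr" field work the way I read the docs -- the endpoint, field name, and paper ID below are my assumptions, not anything from the article.)

import requests

# Sketch: fetch Semantic Scholar's one-sentence "TLDR" summary for a paper.
# The Graph API endpoint and the "tldr" field are my reading of their docs;
# the paper ID is just an example identifier.
PAPER_ID = "arXiv:2004.15011"

resp = requests.get(
    f"https://api.semanticscholar.org/graph/v1/paper/{PAPER_ID}",
    params={"fields": "title,tldr"},
    timeout=10,
)
resp.raise_for_status()
data = resp.json()

print(data.get("title"))
tldr = data.get("tldr") or {}
print(tldr.get("text", "No one-sentence summary available for this paper."))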
It began as an AI-fueled dungeon game
May 2021, Wired via Ars Technica

AKA "GPT-3 text generator video game generates sex scenes involving children"

You know the story by now. Just ask Tay, the Microsoft chatbot that should have been called the n*gg**bot because its entire lexicon defaulted to saying the n-word over and over, with a little bit of Holocaust denial sprinkled in for good measure. (See Ars reporting on that.)

Also though, complaining about your "8-year-old laptop" while playing this game will get you shadowbanned. Who knew robots were that sensitive?

A college kid created a fake, AI-generated blog -- It reached #1 on Hacker News
Dec 2020, MIT Technology Review

And this is exactly what they thought would happen when its release was withheld:
The lab gave the algorithm to select researchers who applied for a private beta, with the goal of gathering their feedback and commercializing the technology by the end of the year.

Porr submitted an application. He filled out a form with a simple questionnaire about his intended use. But he also didn’t wait around. After reaching out to several members of the Berkeley AI community, he quickly found a PhD student who already had access. Once the graduate student agreed to collaborate, Porr wrote a small script for him to run. It gave GPT-3 the headline and introduction for a blog post and had it spit out several completed versions. Porr’s first post (the one that charted on Hacker News), and every post after, was copy-and-pasted from one of the outputs with little to no editing.

The trick to generating content without the need for much editing was understanding GPT-3’s strengths and weaknesses. “It's quite good at making pretty language, and it's not very good at being logical and rational,” says Porr. So he picked a popular blog category that doesn’t require rigorous logic: productivity and self-help. [hilarious]

Porr says his experiment also shows a more mundane but still troubling alternative: people could use the tool to generate a lot of clickbait content. “It's possible that there's gonna just be a flood of mediocre blog content because now the barrier to entry is so easy,” he says. “I think the value of online content is going to be reduced a lot.”
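
Porr's actual script isn't public, but the mechanics described above -- headline plus intro in, several completed drafts out -- are simple enough to sketch. A rough guess at its shape, written against the OpenAI Python library as it looked around then (openai.Completion.create with the base "davinci" engine); the headline, intro, and parameters are my placeholders, not his:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The only inputs GPT-3 gets to see: a headline and an opening paragraph.
headline = "Feeling unproductive? Maybe you should stop overthinking."
intro = (
    "We've all been there. You sit down to write, open the laptop, "
    "and two hours later you've produced nothing."
)

response = openai.Completion.create(
    engine="davinci",      # the original GPT-3 base model
    prompt=f"{headline}\n\n{intro}\n\n",
    max_tokens=700,        # roughly a short blog post
    temperature=0.7,
    n=3,                   # "several completed versions," per the article
)

# Pick whichever draft reads best and paste it in with little to no editing.
for i, choice in enumerate(response.choices, start=1):
    print(f"--- draft {i} ---")
    print(choice.text.strip())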
Post Script:
It doesn't stop with writing; it can also generate imagery:
New module for OpenAI GPT-3 creates unique images from text
Jan 2021, phys.org
A team of researchers at OpenAI, a San Francisco artificial intelligence development company, has added a new module to its GPT-3 autoregressive language model. Called DALL·E, the module accepts text with multiple characteristics, analyzes it, and then draws a picture based on what it believes was described.

The system creates images using a corpus of information drawn from internet pages. Each part of the text is researched in an attempt to learn what it should look like: given a prompt describing, say, a dog with a cat's claws and a bird's tail, it would search for and analyze thousands of pictures of dogs, then study cats and what their claws look like, and then birds and their tails. It then combines the results into several graphic images to give users a variety of results.

via OpenAI: DALL·E: Creating Images from Text: openai.com/blog/dall-e/
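
(There was no public way to call DALL·E when this was written; OpenAI later added an images endpoint to the same Python library. Purely for illustration, a minimal sketch of text-to-image prompting through that later endpoint -- the prompt, image count, and size are arbitrary choices of mine.)

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# One text prompt in, a handful of candidate images out.
response = openai.Image.create(
    prompt="a dog with a cat's claws and a bird's tail, digital art",
    n=3,                 # several graphic images, for a variety of results
    size="512x512",
)

for i, item in enumerate(response["data"], start=1):
    print(f"image {i}: {item['url']}")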
