Wednesday, January 15, 2025

Bodying in the 21st Century


Disney investigating massive leak of internal messages
Jul 2024, BBC News

Disney has confirmed it is investigating an apparent leak of internal messages by a hacking group, which claims it is "protecting artists' rights".

The group, Nullbulge, said it had gained access to thousands of communications from Disney employees and had downloaded "every file possible".

Nullbulge's website says the group targets anyone it believes is harming the creative industry by using content generated by artificial intelligence (AI), which it describes as "theft".

Nullbulge describes itself as "a hacktivist group protecting artists' rights and ensuring fair compensation for their work".

Partially related image credit: This image is taken from a 2024 nj.com article on porch pirates, and is just a great example of a terrible staged photo, which is what happens when we don't give the arts the respect they deserve.

Phantom data could show copyright holders if their work is in AI training data
Jul 2024, phys.org

"Taking inspiration from the map makers of the early 20th century, who put phantom towns on their maps to detect illicit copies, we study how injection of 'copyright traps'—unique fictitious sentences—into the original text enables content detectability in a trained LLM."

via Imperial College London: Matthieu Meeus et al, Copyright Traps for Large Language Models, arXiv (2024). DOI: 10.48550/arxiv.2402.09363

Public Service Announcement: "AI companies are increasingly reluctant to share any information about their training data. While the training data composition for GPT-3 and LLaMA (older models released by OpenAI and Meta AI respectively) is publicly known, it is no longer the case for the more recent models GPT-4 and LLaMA-2."
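The trap idea is easy to see in miniature. Below is a toy sketch of my own (not the paper's method, which works with real LLM likelihoods): a unique fictitious sentence is scattered through a corpus, a trivial bigram "model" is trained on it, and detection just checks whether the model has memorized the trap's word pairs while a control sentence it never saw scores zero. The trap sentence and corpus here are invented for illustration.

```python
# Toy illustration of a "copyright trap": inject a unique fictitious
# sentence into a corpus, then check whether a trained model shows
# evidence of having seen it. (Hypothetical simplification; the real
# paper measures an LLM's likelihood on the trap, not bigram counts.)
import random
from collections import defaultdict

TRAP = "the zinnober lighthouse hums on alternate thursdays"

def inject_trap(documents, trap, copies=50):
    """Scatter copies of the trap sentence through the corpus."""
    docs = list(documents)
    for _ in range(copies):
        docs.insert(random.randrange(len(docs) + 1), trap)
    return docs

def train_bigram(corpus):
    """Count bigram frequencies -- a stand-in for 'training a model'."""
    counts = defaultdict(int)
    for doc in corpus:
        words = doc.split()
        for a, b in zip(words, words[1:]):
            counts[(a, b)] += 1
    return counts

def seen_score(model, sentence):
    """Fraction of the sentence's bigrams the model has memorized."""
    pairs = list(zip(sentence.split(), sentence.split()[1:]))
    return sum(model[p] > 0 for p in pairs) / len(pairs)

corpus = ["the lighthouse keeper walks home", "songs about rain are common"]
model = train_bigram(inject_trap(corpus, TRAP))
print(seen_score(model, TRAP))                        # 1.0: trap was trained on
print(seen_score(model, "purple gears sing loudly"))  # 0.0: unseen control
```

A high score on the trap but not on controls is the signal that your text was in the training data, which is exactly what the phantom towns gave the mapmakers.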


Video game strike - Online games could be first to be hit
Aug 2024, BBC News

Members of union SAG-Aftra, which represents approximately 160,000 performers, recently staged a picket outside the offices of Warner Bros, one of 10 game companies negotiating with the union. The companies say their offer gives workers "meaningful protections", but SAG-Aftra disagrees.

"AI technology lets these companies put your face, your voice, your body into something that you may not even have agreed to," says Duncan.


FBI busts musician’s elaborate AI-powered $10M streaming-royalty heist
Sep 2024, Benji Edwards for Ars Technica; originally reported by The New York Times

Smith's scheme, which prosecutors say ran for seven years, involved creating thousands of fake streaming accounts using purchased email addresses. He developed software to play his AI-generated music on repeat from various computers, mimicking individual listeners from different locations. In an industry where success is measured by digital listens, Smith's fabricated catalog reportedly managed to rack up billions of streams.

To avoid detection, Smith spread his streaming activity across numerous fake songs, never playing a single track too many times. He also generated unique names for the AI-created artists and songs, trying to blend in with the quirky names of legitimate musical acts. Smith used artist names like "Callous Post" and "Calorie Screams," while their songs included titles such as "Zygotic Washstands" and "Zymotechnical."

...2018 when he partnered with an as-yet-unnamed AI music company CEO and a music promoter to create a large library of computer-generated songs.


New tool makes songs unlearnable to generative AI
Oct 2024, phys.org

Reminder: 
"Most of the high-quality artworks online are copyrighted, but these companies can get the copyrighted versions very easily. Maybe they pay $5 for a song, like a normal user, and they have the full version. But that purchase only gives them a personal license; they are not authorized to use the song for commercialization."

Companies will often ignore that restriction and train their AI models on the copyrighted work. 
...

HarmonyCloak makes musical files unlearnable to generative AI models without changing how they sound to humans.

"Our idea is to minimize the knowledge gap [between new information and their existing knowledge] ourselves so that the model mistakenly recognizes a new song as something it has already learned. That way, even if an AI company can still feed your music into their model, the AI 'thinks' there is nothing to learn from it."

via Department of Electrical Engineering and Computer Science at University of Tennessee at Knoxville and Lehigh University: They will present their research at the 46th IEEE Symposium on Security and Privacy (S&P) in May 2025.
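The "nothing to learn" intuition has a clean numeric picture. The sketch below is my own toy, not the HarmonyCloak algorithm (which also has to keep the perturbation inaudible): for a simple linear model, a sample is perturbed just enough that the model's training loss on it is already zero, so the gradient vanishes and an optimizer extracts nothing from the protected file. All names and numbers here are invented for illustration.

```python
# Toy "unlearnable sample" sketch (hypothetical, not HarmonyCloak itself):
# perturb a feature vector so a linear model's squared-error loss on it is
# already ~0. Zero loss means zero gradient, so training on the cloaked
# sample teaches the model nothing new.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # stand-in "model": linear weights
x = rng.normal(size=8)   # stand-in "song": a feature vector
y = 1.0                  # training target a scraper would fit against

# Smallest perturbation delta satisfying w @ (x + delta) == y,
# which drives the loss -- and hence its gradient -- to zero.
delta = (y - w @ x) / (w @ w) * w
x_cloaked = x + delta

loss = 0.5 * (w @ x_cloaked - y) ** 2
grad = (w @ x_cloaked - y) * w     # gradient of the loss w.r.t. w
print(round(loss, 12), np.allclose(grad, 0.0))  # → 0.0 True
```

Real generative models are nonlinear, so the actual system presumably solves this as an iterative optimization under a perceptual constraint rather than in closed form; the paper at IEEE S&P 2025 has the details.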
