Sometimes you have to hand it to these scientists; the stuff they come up with is pretty smart.
Who wrote this? Engineers discover novel method to identify AI-generated text
Mar 2024, phys.org
First, an interesting note:
"Stubbornness" is when LLMs show a tendency to alter human-written text more readily than AI-generated text, and it happens because LLMs often regard AI-generated text as already optimal and thus make minimal changes.
Next, the purpose:
Raidar (geneRative AI Detection viA Rewriting) - identifies whether text has been written by a human or generated by AI or LLMs, without needing access to a model's internal workings.
Finally, the clever part:
It uses a language model to rephrase a given text and then measures how many edits the system makes. Many edits suggest the text was written by a human; few edits suggest it was machine-generated.
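Here's a minimal sketch of that core loop. The `rewrite()` helper is hypothetical, standing in for any LLM call with a "rewrite this, keeping the meaning" prompt, and the 0.3 threshold is illustrative, not from the paper:

```python
import difflib

def rewrite(text: str) -> str:
    """Hypothetical helper: send `text` to any LLM with a prompt like
    'Rewrite this, keeping the meaning' and return the model's output."""
    raise NotImplementedError

def edit_ratio(original: str, rewritten: str) -> float:
    # Fraction of the text that changed (0.0 = the model left it alone).
    return 1.0 - difflib.SequenceMatcher(None, original, rewritten).ratio()

def detect(text: str, threshold: float = 0.3) -> str:
    # The "stubbornness" effect: LLMs barely touch AI-generated text,
    # so a low edit ratio points to a machine author.
    return "human" if edit_ratio(text, rewrite(text)) > threshold else "AI"
```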
via Columbia University School of Engineering and Applied Science: Chengzhi Mao et al, Raidar: geneRative AI Detection viA Rewriting, arXiv (2024). DOI: 10.48550/arxiv.2401.12970
Mostly unrelated image credit: AI Art - A Watercolor Drawing of Cheerleaders - 2024
Random robots are more reliable: New AI algorithm for robots consistently outperforms state-of-the-art systems
May 2024, phys.org
Maximum Diffusion Reinforcement Learning (MaxDiff RL) - an algorithm that encourages robots to explore their environments as randomly as possible in order to gain a diverse set of experiences; "designed randomness"; improves the quality of the data collected
If the robots move randomly, instead of along some highly calculated, optimized trajectories, the data they collect about the world around them is somehow better; when randomness is the base, it builds way better structures. I'm immediately thinking of watching a baby learn to move their body parts, or their vocal cords; underneath those first recognizable attempts is an endless iteration of random movements that are only just now starting to get it right.
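This is not the paper's algorithm, just a toy sketch of the data-diversity intuition: on a small grid, a maximally random policy visits far more distinct states than one "optimized" habitual trajectory, so the experience it collects is richer:

```python
import random

def coverage(policy, steps=1000, size=10, seed=0):
    """Count distinct cells a policy visits on a toy 2D grid."""
    rng = random.Random(seed)
    x = y = size // 2
    visited = {(x, y)}
    for _ in range(steps):
        dx, dy = policy(rng)
        x = max(0, min(size - 1, x + dx))
        y = max(0, min(size - 1, y + dy))
        visited.add((x, y))
    return len(visited)

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]
random_policy = lambda rng: rng.choice(MOVES)  # maximally random exploration
habit_policy = lambda rng: MOVES[0]            # one fixed, "optimized" habit

print("random coverage:", coverage(random_policy))  # covers most of the grid
print("habit coverage:", coverage(habit_policy))    # a single straight line
```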
via Northwestern McCormick School of Engineering: Maximum diffusion reinforcement learning, Nature Machine Intelligence (2024). DOI: 10.1038/s42256-024-00829-3
Post Script: It's funny to think of this, the MaxDiff RL, as an "algorithm," since the point here is kind of to get rid of the algorithms. When the algorithm is random, is it still an algorithm? Is randomness the anti-algorithm?
Researchers test AI systems' ability to solve the New York Times' connections puzzle
May 2024, phys.org
Chain of thought prompting:
The researchers found that explicitly prompting GPT-4 to reason through the puzzles step by step significantly boosted its performance, to just over 39% of puzzles solved. "Our research confirms prior work showing this sort of 'chain-of-thought' prompting can make language models think in more structured ways. Asking the language models to reason about the tasks that they're accomplishing helps them perform better."
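As a sketch, the entire difference is in the prompt. `ask_llm()` below is a stand-in for any chat-completion call, and the puzzle words are made up for illustration:

```python
def ask_llm(prompt: str) -> str:
    """Stand-in for any chat-completion API call."""
    raise NotImplementedError

WORDS = ["SOLE", "TROUT", "CARP", "PIKE",       # fish
         "DRUM", "HORN", "CELLO", "VIOLA",      # instruments
         "CRIMSON", "SCARLET", "RUBY", "ROSE",  # shades of red
         "MARS", "VENUS", "PLUTO", "MERCURY"]   # planets (well, mostly)

direct = f"Group these 16 words into four related sets of four: {WORDS}"

chain_of_thought = (
    f"Group these 16 words into four related sets of four: {WORDS}\n"
    "Think step by step: list candidate categories first, test each word "
    "against them, note words that could fit two groups, then state your "
    "final four groups."
)

# The finding, in effect: ask_llm(chain_of_thought) solves far more
# puzzles than ask_llm(direct).
```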
via NYU Tandon School of Engineering: Graham Todd et al, Missed Connections: Lateral Thinking Puzzles for Large Language Models, arXiv (2024). DOI: 10.48550/arxiv.2404.11730
New ransomware attack based on an evolutional generative adversarial network can evade security measures
Jun 2024, phys.org
GAN-based architectures consist of two artificial neural networks that compete against each other to generate increasingly "better" results on a specific task.
You already know it as the way we got hyperrealistic image generation, and the same adversarial trick is now being used to make malware attacks more effective.
These scientists tested a version of this attack-enhancing approach and found their framework capable of bypassing the majority of available antivirus systems.
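For reference, here's what that two-network competition looks like in the generic case: a minimal PyTorch GAN learning a toy 1-D distribution (an ordinary GAN sketch, nothing to do with the paper's ransomware pipeline):

```python
import torch
import torch.nn as nn

# Generator turns noise into samples; discriminator judges real vs. fake.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(32, 1) * 0.5 + 3.0  # "real" data: samples from N(3, 0.5)
    fake = G(torch.randn(32, 8))           # generated samples from noise

    # Discriminator step: learn to label real as 1 and generated as 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: adjust G so D mistakes its output for real data.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```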
via Texas A&M University and Ho Technical University: Daniel Commey et al, EGAN: Evolutional GAN for Ransomware Evasion, 2023 IEEE 48th Conference on Local Computer Networks (LCN) (2023). DOI: 10.1109/LCN58197.2023.10223320
New technique improves the reasoning capabilities of large language models
Jun 2024, phys.org
Their approach, called natural language embedded programs (NLEPs), involves prompting a language model to create and execute a Python program to solve a user's query, and then output the solution as natural language.
NLEPs also improve transparency, since a user could check the program to see exactly how the model reasoned about the query and fix the program if the model gave a wrong answer.
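A minimal sketch of that loop, assuming a hypothetical `ask_llm()` that returns Python source; real use would sandbox the `exec()` call:

```python
import io
import contextlib

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for any code-capable LLM call."""
    raise NotImplementedError

def nlep_answer(query: str) -> str:
    # Step 1: ask the model for a program, not a direct answer.
    program = ask_llm(
        "Write a self-contained Python program that solves the question "
        f"below and prints the answer as a full sentence:\n\n{query}"
    )
    # Step 2: execute it and capture stdout as the natural-language answer.
    # (Reading `program` is the transparency win: you can audit or fix it.)
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(program, {})  # sketch only; sandbox untrusted model output
    return buf.getvalue().strip()
```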
via MIT: Tianhua Zhang et al, Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning, arXiv (2023). DOI: 10.48550/arxiv.2309.10814