But First:
Stressed plants emit airborne sounds that can be detected from more than a meter away
Mar 2023, phys.org
Tomato and tobacco plants that are stressed -- from dehydration or having their stems severed -- emit sounds comparable in volume to normal human conversation that sound like popping bubble wrap, but at a frequency we can't hear, at 30–50 clicks per hour at random intervals.
via Tel Aviv University: Lilach Hadany, Sounds emitted by plants under stress are airborne and informative, Cell (2023). DOI: 10.1016/j.cell.2023.03.009.
Completely unrelated image credit: The OMEGA Laser - J Adam Fenster and University of Rochester Laboratory for Laser Energetics - 2023
New research shows how cultural transmission shapes the evolution of music
Mar 2023, phys.org
From the Max Planck Institute for Empirical Aesthetics; their study design was awesome:
The researchers used singing experiments to study music evolution in unprecedented detail: testing the evolution of more than 3,400 melodies sung by around 1,800 participants from India and North America. To simulate music evolution, they used a method similar to the classic game of "broken telephone," where messages are passed from one participant to the next. In this case, melodies had to be passed from one participant to the next by singing.

Over time, participants make mistakes in their efforts to repeat the melodies they hear, which gradually shapes the evolution of music toward melodies that are appealing and easy for everyone to learn. They call it the transmission bias, and they found that US participants tended to produce melodies biased toward certain cultural conventions of Western music, whereas Indian participants showed a bias toward common Indian scales.
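To make the broken-telephone mechanism concrete, here is a toy Python sketch -- not the authors' code; the parameters and the choice of a major scale as the "familiar" scale are assumptions for illustration. Each simulated singer copies the melody with random error plus a pull toward familiar scale tones, so after a few generations the melody drifts toward that scale.

```python
import random

MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}   # pitch classes of a "culturally familiar" scale (assumed)

def nearest_scale_tone(pitch):
    """Return the scale tone (in semitones) closest to `pitch`."""
    candidates = [p for p in range(-1, 25) if p % 12 in MAJOR_SCALE]
    return min(candidates, key=lambda p: abs(p - pitch))

def sing_once(melody, error_sd=1.0, bias=0.5):
    """One 'singer' reproduces the melody: random error plus a pull toward the scale."""
    reproduced = []
    for pitch in melody:
        heard = pitch + random.gauss(0, error_sd)   # imperfect reproduction
        if random.random() < bias:                  # the transmission bias
            heard = nearest_scale_tone(heard)
        reproduced.append(round(heard))
    return reproduced

def transmission_chain(seed_melody, generations=10):
    """Pass the melody down a chain of singers, broken-telephone style."""
    melody = seed_melody
    for _ in range(generations):
        melody = sing_once(melody)
    return melody

random.seed(0)
seed = [random.randint(0, 12) for _ in range(5)]   # random starting melody
print("seed:   ", seed)
print("evolved:", transmission_chain(seed))
```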
via Max Planck Institute for Empirical Aesthetics: Manuel Anglada-Tort et al, Large-scale iterated singing experiments reveal oral transmission mechanisms underlying music evolution, Current Biology (2023). DOI: 10.1016/j.cub.2023.02.070
Here comes the sun: New study shows how UK weather conditions influence music success in the markets
May 2023, phys.org
The research, which analyzed over 23,000 songs that reached the UK weekly top charts over the last 70 years, found that songs that were energetic, danceable, and evoked positive emotions such as joy and happiness were positively associated with warm and sunny weather and negatively associated with rainy and cold months. Likewise, energetic and positive music followed the expected seasonal pattern in the UK, increasing in summer and decreasing in winter.

BUT -- hyper-popular songs exhibited the strongest associations with weather fluctuations, while less popular songs showed no relationship at all.

AND -- only music features reflecting high intensity and positive emotions were associated with weather conditions; music features reflecting low intensity and negative emotions were not related to weather at all. This suggests that negative emotional states may be more influenced by individual situational factors than by general environmental conditions.
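For the shape of the analysis being described, a minimal sketch -- assumed column names and synthetic placeholder numbers, nothing from the actual chart dataset: correlate high-intensity and positive-emotion audio features with monthly weather, then split by chart popularity the way the paper does.

```python
import pandas as pd

# Synthetic placeholder rows (NOT real chart or weather data); each row is one
# charting song, with audio features and the weather of the month it charted.
songs = pd.DataFrame({
    "month_temp_c":   [4.1, 6.0, 9.5, 13.2, 16.8, 19.5, 18.9, 15.3, 11.0, 7.2],
    "sunshine_hours": [55, 70, 110, 160, 190, 200, 185, 150, 105, 65],
    "energy":         [0.48, 0.50, 0.55, 0.60, 0.66, 0.70, 0.68, 0.62, 0.56, 0.50],
    "valence":        [0.40, 0.43, 0.49, 0.55, 0.61, 0.65, 0.63, 0.57, 0.50, 0.44],
    "chart_peak":     [12, 30, 5, 2, 1, 3, 1, 8, 20, 35],   # lower = more popular
})

# Association between weather and high-intensity / positive features
print(songs[["month_temp_c", "sunshine_hours", "energy", "valence"]].corr())

# The paper's wrinkle: the association should be strongest for the most popular songs.
top = songs[songs["chart_peak"] <= 10]
rest = songs[songs["chart_peak"] > 10]
print("top-chart songs:   ", top["month_temp_c"].corr(top["energy"]))
print("less popular songs:", rest["month_temp_c"].corr(rest["energy"]))
```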
via University of Oxford: Manuel Anglada-Tort et al, Here comes the sun: music features of popular songs reflect prevailing weather conditions, Royal Society Open Science (2023). DOI: 10.1098/rsos.221443
Research shows why our taste in music can't be siloed into catch-all genres
Jun 2023, phys.org
That one trick: "The pop music people liked best was from the decade during which they were around 20 years old." (Learned this from a wedding DJ, many years ago.)
Also: Max Planck Institute for Empirical Aesthetics (exists).
Other results: "By systematically recording liking at genre and sub-genre levels, the researchers obtained a more differentiated picture of musical taste. ... Across all genres, subtypes that represented the mainstream variant were generally preferred over more challenging alternatives."
via Max Planck Institute for Empirical Aesthetics: You don't know a person('s taste) when you only know which genre they like: Taste differences within five popular music genres based on sub-genres and sub-styles, Frontiers in Psychology (2023). DOI: 10.3389/fpsyg.2023.1062146
Brain2Music taps thoughts to reproduce music
Jul 2023, phys.org
https://techxplore.com/news/2023-07-brain2music-thoughts-music.html
Music samples covering 10 genres including rock, classical, metal, hip-hop, pop and jazz were played for five subjects while researchers recorded their brain activity with functional MRI (fMRI). This data trained a model to match the music to the brain activity. Then a second model, trained on music-text data to generate music from text prompts, was combined with the brains-on-music model. The two models mix and match so that "the generated music resembles the musical stimuli that human subjects experienced, with respect to semantic properties like genre, instrumentation and mood".
In other words, with your own machine like this, you would be able to make music just by thinking about it.
Also noted: despite advances in text-to-music models, "their internal processes are still poorly understood."
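A rough sketch of that two-stage idea, under heavy assumptions: invented array shapes, a generic ridge regression for the fMRI-to-embedding step, and a placeholder standing in for the pretrained MusicLM-style generator, which isn't something you can call like this.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_clips, n_voxels, emb_dim = 500, 6000, 128   # assumed sizes, for illustration only

fmri = rng.standard_normal((n_clips, n_voxels))       # fMRI response per music clip
music_emb = rng.standard_normal((n_clips, emb_dim))   # embedding of the clip actually heard

# Stage 1: learn a map from voxel responses to the music-embedding space.
decoder = Ridge(alpha=1000.0)   # heavy regularization: far more voxels than clips
decoder.fit(fmri[:400], music_emb[:400])
predicted_emb = decoder.predict(fmri[400:])   # embeddings decoded from held-out brain data

# Stage 2: hand the decoded embedding to an embedding-conditioned music generator.
def generate_from_embedding(embedding):
    """Placeholder for a pretrained MusicLM-style generator (not a real API)."""
    raise NotImplementedError("stand-in for the pretrained text/music model")

# reconstruction = generate_from_embedding(predicted_emb[0])
```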
via Google and Osaka University: Timo I. Denk et al, Brain2Music: Reconstructing Music from Human Brain Activity, arXiv (2023). DOI: 10.48550/arxiv.2307.11078
Brain recordings capture musicality of speech, with help from Pink Floyd
Aug 2023, phys.org
The phrase "All in all it was just a brick in the wall" comes through recognizably in the reconstructed song, its rhythms intact and the words muddy but decipherable. This is the first time researchers have reconstructed a recognizable song from brain recordings, in this case from 29 patients. [You can actually hear the reconstruction at the link above.] It comes from some of the same people who, in 2012, produced the first reconstruction of the words a person was hearing from recordings of brain activity alone.
Also about left brain vs right brain, language vs music, pinpoint vs field action --
Bellier emphasized that the study, which used artificial intelligence to decode brain activity and then encode a reproduction, did not merely create a black box to synthesize speech. He and his colleagues were also able to pinpoint new areas of the brain involved in detecting rhythm, such as a thrumming guitar, and discovered that some portions of the auditory cortex -- in the superior temporal gyrus, located just behind and above the ear -- respond at the onset of a voice or a synthesizer, while other areas respond to sustained vocals.

"Language is more left brain. Music is more distributed, with a bias toward right," Knight said.

"It wasn't clear it would be the same with musical stimuli," Bellier said. "So here we confirm that that's not just a speech-specific thing, but that it's more fundamental to the auditory system and the way it processes both speech and music."
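For a sense of what "nonlinear decoding" means mechanically, a hedged sketch with invented shapes and synthetic data, not the authors' pipeline: predict each spectrogram frame of the song from a short window of high-gamma activity across the electrodes; a separate phase-reconstruction step would then turn the predicted spectrogram back into audio.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_frames, n_electrodes, n_mels, lag = 2000, 128, 64, 5   # assumed sizes

ecog = rng.standard_normal((n_frames, n_electrodes))     # high-gamma power per frame
spectrogram = rng.standard_normal((n_frames, n_mels))    # mel spectrogram of the song

# Use a short window of preceding neural activity as the input for each time point.
X = np.stack([ecog[i - lag:i].ravel() for i in range(lag, n_frames)])
y = spectrogram[lag:]

model = MLPRegressor(hidden_layer_sizes=(256,), max_iter=50)   # the "nonlinear" part
model.fit(X[:1500], y[:1500])
decoded = model.predict(X[1500:])   # predicted spectrogram frames for held-out audio

# A phase-reconstruction step (e.g. Griffin-Lim) applied to `decoded` would give
# the muddy-but-recognizable waveform described above.
```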
via University of California Berkeley, Albany Medical Center: Music can be reconstructed from human auditory cortex activity using nonlinear decoding models, PLoS Biology (2023). DOI: 10.1371/journal.pbio.3002176
Image credit: Crystalline Density Wave illustration - Harald Ritsch Innsbruck University EPFL - 2023
What's love got to do with it? An exception to the recognition of musical themes
Sep 2023, phys.org
(I am so confused by this study)
Researchers played 14-second snippets of vocals in 31 languages, from a bank of songs that originated from a host of cultures, to more than 5,000 people from 49 countries. The research team included subjects not only from the industrialized world, but also more than 100 individuals who live in three small, relatively isolated groups of no more than 100 people each.

They then asked the listeners to rank the likelihood of each sample being one of four music types: dance, lullabies, "healing" music, or love music. People from all cultures could easily identify all except love songs -- only 12 of 28 groups could recognize love songs.
(Is it the point of the study to prove that a love song is determined by its lyrics rather than by its musical style? Confused.)
"One reason for this could be that love songs may be a particularly fuzzy category that includes songs that express happiness and attraction, but also sadness and jealousy," said lead author Lidya Yurdum, who works as research assistant at the Yale Child Study Center and is also a graduate student at the University of Amsterdam. "Listeners who heard love songs from neighboring countries and in languages related to their own actually did a little better, likely because of the familiar linguistic and cultural clues."But other than love songs, the authors discovered, the listeners' "ratings were largely accurate, consistent with one another, and not explained by their linguistic or geographical proximity to the singer—showing that musical diversity is underlain by universal psychological phenomena."
via Yale: Lidya Yurdum et al, Universal interpretations of vocal music, Proceedings of the National Academy of Sciences (2023). DOI: 10.1073/pnas.2218593120
Research demonstrates the power of rhythm as a design element in evolution and robotics
Oct 2023, phys.org
Varying the rhythm of movement -- the music of how the pieces move together over time -- is a new way of designing robot movement. For example, the breaststroke is characterized by three time-intervals: a slow period of reaching forward, a fast period of pushing backward and a static period of coasting. For optimum performance, the lengths of those intervals typically go long, fast, long. But in certain situations -- outracing or outmaneuvering a predator, for example -- the ratios of those periods change drastically.

The work builds on research Bejan published nearly 20 years ago, where he demonstrated that size and speed go hand in hand across the entire animal kingdom, whether on land, in the air or under water. The physics underlying that work dealt with weight falling forward from a given animal's height over and over again. In this paper, Bejan shows that his previous work was incomplete, and that all animals, robots and other moving things can further optimize their mechanics by adding an element of rhythm.

It is yet another example of how good design -- whether made by humans or through natural evolution -- is truly a form of art.
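To see rhythm as a design variable in the simplest possible terms, here is a toy model -- my own invention for illustration, not Bejan's constructal analysis: one stroke cycle split into reach, push and coast phases under a fixed energy budget, where changing the split of the cycle changes the average speed.

```python
import numpy as np

def cycle_performance(t_reach, t_push, t_coast, energy=1.0, drag=0.5):
    """Average speed over one stroke cycle under a fixed energy budget (toy model).
    Pushing harder for a shorter time gives a higher peak speed (power ~ v^3
    against drag); speed then decays exponentially while coasting; no progress
    is made during the reach."""
    power = energy / t_push
    v_push = (power / drag) ** (1 / 3)                               # steady speed against v^3 drag
    coast_dist = (v_push / drag) * (1 - np.exp(-drag * t_coast))     # v(t) = v_push * exp(-drag * t)
    distance = v_push * t_push + coast_dist
    return distance / (t_reach + t_push + t_coast)

# Grid-search the rhythm: which split of push vs coast time moves fastest overall?
grid = np.linspace(0.1, 2.0, 20)
best = max(((tp, tc) for tp in grid for tc in grid),
           key=lambda x: cycle_performance(0.3, x[0], x[1]))
print("best (push, coast) durations for a 0.3 s reach:", best)
```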
via Duke University: A. Bejan et al, Locomotion rhythm makes power and speed, Scientific Reports (2023). DOI: 10.1038/s41598-023-41023-6