Singing researchers find cross-cultural patterns in music and language
May 2024, phys.org
An international team of researchers recorded themselves performing traditional music and speaking in their native language. In all 50+ languages, the rhythms of songs and instrumental melodies were slower than those of speech, while the pitches were higher and more stable.
Speculating on underlying reasons for the cross-cultural similarities, Savage suggests songs are more predictably regular than speech because they are used to facilitate synchronization and social bonding.
"Slow, regular, predictable melodies make it easier for us to sing together in large groups," he says. "We're trying to shed light on the cultural and biological evolution of two systems that make us human: music and language."
via Max Planck Institute for Psycholinguistics in Nijmegen: Yuto Ozaki et al, Globally, songs and instrumental melodies are slower, higher, and use more stable pitches than speech: A Registered Report, Science Advances (2024). DOI: 10.1126/sciadv.adm9797.
Neuroplasticity study shows how singing rehabilitates speech production in post-stroke aphasia
May 2024, phys.org
According to the findings, singing repairs the structural language network of the brain, the network that processes language and speech and that is damaged in patients with aphasia.
"For the first time, our findings demonstrate that the rehabilitation of patients with aphasia through singing is based on neuroplasticity changes."
via University of Helsinki: Aleksi J. Sihvonen et al, Structural Neuroplasticity Effects of Singing in Chronic Aphasia, eNeuro (2024). DOI: 10.1523/ENEURO.0408-23.2024
Study shows the power of social connections to predict hit songs
Jun 2024, phys.org
The team analyzed data from last.fm covering 2.7 million users, 10 million songs, and 300 million plays, along with two networks: one mapping friendships and another capturing influence dynamics (who listens to a song and who follows suit).
Examining the first 200 plays of a new song, they predicted its chances of becoming a hit.
The researchers improved the precision of predicting hit songs from 14% to 21%.
Existing models often focus on artist fame and listening metrics, but the CSH study highlights the overlooked social aspect - musical homophily, which is the tendency for friends to listen to similar music.
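The influence-dynamics idea can be pictured with a toy feature: among a song's earliest listeners, how many already had a friend who played it? This is a hypothetical sketch with invented data structures, not the study's actual feature set:

```python
def social_adoption_fraction(plays, friends, n_first=200):
    """Fraction of a song's earliest listeners who had at least one
    friend among the listeners who came before them.

    plays   -- user ids ordered by each user's first play of the song
    friends -- dict mapping user id -> set of friend ids
    """
    earlier = set()
    socially_influenced = 0
    first = plays[:n_first]
    for user in first:
        if friends.get(user, set()) & earlier:
            socially_influenced += 1
        earlier.add(user)
    return socially_influenced / len(first) if first else 0.0

# Toy data: b and c are friends of a, who listened first; d has no friends.
friends = {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}, "d": set()}
plays = ["a", "b", "d", "c"]
print(social_adoption_fraction(plays, friends))  # → 0.5 (b and c followed a friend)
```

A feature like this, computed over the first 200 plays, is the kind of social signal that listening-metric-only models leave out.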
via Complexity Science Hub Vienna: Niklas Reisz et al, Quantifying the impact of homophily and influencer networks on song popularity prediction, Scientific Reports (2024). DOI: 10.1038/s41598-024-58969-w
Understanding the synchronization of physiological states during a live music performance
Jul 2024, phys.org
"Music-induced synchronization of heart rate may be the mechanism underlying the coherent behavior of a large audience in a theater."
Synchronization of physical and cognitive processes is stronger within an individual than between different individuals, and heart rate synchronization in response to music depends on the listener's reliable physiological responses, not on their mood or music preferences.
Note that in the study, to gauge the effect of mood, they had participants listen to the same piece of music on different days. And to quantify the influence of music preference on heart rate synchronization, the researcher tested whether synchronization differed between a piece of music selected by the researcher and a piece that deeply moved the participant, played in randomized order.
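The within-listener versus between-listener contrast can be illustrated with synthetic heart-rate series, using plain Pearson correlation as a stand-in for the paper's synchronization measure (all signals here are invented):

```python
import numpy as np

def sync(x, y):
    """Pearson correlation between two heart-rate series aligned to the music."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

rng = np.random.default_rng(0)
stimulus = np.sin(np.linspace(0, 8 * np.pi, 240))   # shared music-driven component

# Listener A responds reliably to the music on both days; listener B barely does.
a_day1 = stimulus + 0.3 * rng.standard_normal(240)
a_day2 = stimulus + 0.3 * rng.standard_normal(240)
b_day1 = 0.2 * stimulus + rng.standard_normal(240)

print(sync(a_day1, a_day2))   # within-listener, across days: high
print(sync(a_day1, b_day1))   # between listeners: much lower
```

The point of the repeated-listening design is exactly this comparison: a reliable responder synchronizes with themselves across days, and it is that reliability, not taste or mood, that predicts synchronization with others.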
Why this matters: "From data on small audiences, for example, the degree of proficiency of performers, commercial success can be predicted in terms of reliability."
via Waseda University: Ryota Nomura, Reliability for music-induced heart rate synchronization, Scientific Reports (2024). DOI: 10.1038/s41598-024-62994-0
Classical music lifts our mood by synchronizing our 'extended amygdala'
Aug 2024, phys.org
The study focused on 13 patients with treatment-resistant depression who already had electrodes implanted in their brains for the purpose of deep-brain stimulation. These implants are placed in a circuit connecting two areas in the forebrain—the bed nucleus of the stria terminalis (BNST) and the nucleus accumbens (NAc). Using these implants, the researchers found that music generates its antidepressant effects by synchronizing the neural oscillations between the auditory cortex, which is responsible for processing of sensory information, and the rewards circuit, which is responsible for processing emotional information.
The patients in the study were assigned to two groups: low music appreciation or high music appreciation. Those in the high music appreciation group demonstrated more significant neural synchronization and better antidepressant effects, while those in the low music appreciation group showed poorer results.
via Center for Functional Neurosurgery at Shanghai Jiao Tong University: Auditory entrainment coordinates cortical-BNST-NAc triple time locking to alleviate the depressive disorder, Cell Reports (2024). DOI: 10.1016/j.celrep.2024.114474.
Intermission on sonic superpowers:
Singing from memory unlocks a surprisingly common musical superpower
Aug 2024, phys.org
They asked people to sing out any earworms they were experiencing and record them on their phones when prompted at random times throughout the day, and found the majority perfectly matched the pitch of the original songs.
"A surprisingly large portion of the population has a type of automatic, hidden 'perfect pitch' ability."
They think it depends on whether the music was recalled deliberately or as a result of an "earworm," which is why they asked people to record themselves as soon as one started playing in their head. So now they think there may be something unique about musical memories and the ways they are encoded and maintained in our brains.
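Pitch accuracy in studies like this is usually expressed in cents, hundredths of a semitone. A small helper, assuming you already have fundamental frequencies for the sung and original versions:

```python
import math

def cents_off(sung_hz, original_hz):
    """Deviation of a sung pitch from the original, in cents
    (100 cents = 1 semitone, 1200 cents = 1 octave)."""
    return 1200 * math.log2(sung_hz / original_hz)

print(round(cents_off(440.0, 440.0)))    # → 0 (dead on)
print(round(cents_off(466.16, 440.0)))   # → 100 (about a semitone sharp)
```

The log ratio makes the measure key-independent: singing the whole song a bit sharp or flat shows up as a constant offset in cents, which is exactly what such studies look for.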
via University of California Santa Cruz: Matthew G. Evans et al, Absolute pitch in involuntary musical imagery, Attention, Perception, & Psychophysics (2024). DOI: 10.3758/s13414-024-02936-0
Bach, Mozart or jazz: Scientists provide a quantitative measure of variability in music pieces
Nov 2024, phys.org
They even mention Meyer in the write up: "According to Meyer, emotions and meaning in music arise from the interplay of expectations and their fulfillment or (temporary) non-fulfillment."
Sorry I have to copy mostly the whole thing, because this stuff is complicated:
They used time series analysis to infer how similar a tone sequence is to previous sequences (the autocorrelation function of musical pitch sequences).
They analyzed more than 450 jazz improvisations and 99 classical compositions, and found that the autocorrelation function of pitches initially decreases slowly with the time difference. This reflects high similarity and makes it possible to anticipate upcoming musical sequences.
However, they found that there is a time limit, after which this similarity and predictability ends relatively abruptly. For larger time differences, the autocorrelation function and memory are both negligible.
Of particular interest are the transition times of the pieces, where the more predictable behavior changes into completely unpredictable, uncorrelated behavior. Depending on the composition or improvisation, the scientists found transition times ranging from a few quarter notes to about 100 quarter notes. Jazz improvisations typically had shorter transition times than many classical compositions, and were therefore usually less predictable.
Differences could also be observed between different composers. For example, the researchers found transition times between five and twelve quarter notes in various compositions by Johann Sebastian Bach, while the transition times in various compositions by Mozart ranged from eight to 22 quarter notes. This implies that the anticipation and expectation of the musical progression tends to last longer in Mozart's compositions than in Bach's compositions, which offer more variability and surprises.
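The quantity in question can be sketched as follows, assuming pitches are given as MIDI note numbers and using a simple threshold rule to estimate the transition time (the paper's estimator is more sophisticated):

```python
import numpy as np

def pitch_autocorrelation(pitches, max_lag):
    """Autocorrelation of a pitch sequence (e.g. MIDI note numbers) up to max_lag."""
    x = np.asarray(pitches, float)
    x = x - x.mean()
    denom = float(np.dot(x, x))
    return np.array([float(np.dot(x[: len(x) - k], x[k:])) / denom
                     for k in range(max_lag + 1)])

def transition_time(acf, threshold=0.1):
    """First lag (ignoring lag 0) at which the autocorrelation falls below threshold."""
    below = np.flatnonzero(np.abs(acf[1:]) < threshold)
    return int(below[0]) + 1 if below.size else None

motif = np.tile([60, 62, 64, 65], 32)             # repeats every 4 notes: long memory
rng = np.random.default_rng(1)
random_pitches = rng.integers(55, 79, size=128)   # uncorrelated notes: no memory
acf_motif = pitch_autocorrelation(motif, 16)
acf_rand = pitch_autocorrelation(random_pitches, 16)
print(acf_motif[4])                 # close to 1: strong memory at the motif's period
print(transition_time(acf_motif))   # None: stays correlated over these lags
print(transition_time(acf_rand))    # small: predictability is lost almost immediately
```

A short transition time, as in the random sequence, corresponds to the jazz-improvisation end of the spectrum; the repeating motif behaves like the long-memory end.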
via Max Planck Institute for Dynamics and Self-Organization and the University of Göttingen: Corentin Nelias et al, Stochastic properties of musical time series, Nature Communications (2024). DOI: 10.1038/s41467-024-53155-y
Study explores how brain waves reflect melody predictions while listening to music
Nov 2024, phys.org
"We aimed to disentangle the frequency-specific neural dynamics linked to melodic prediction uncertainty (modeled as entropy) and prediction error (modeled as surprisal) for temporal (note onset) and content (note pitch) information."
They recruited 20 participants, half of whom were professional pianists, to listen to 10 piano melodies extracted from the work of Johann Sebastian Bach, each lasting approximately 150 seconds, while their brain activity was recorded using electroencephalography (EEG).
"An analysis of the temporal response function (TRF) weights revealed that the temporal predictability of a note (entropy of note onset) may be encoded in the delta- (1–4 Hz) and beta-band (12–30 Hz) brain activity prior to the stimulus, suggesting that these frequency bands associate with temporal predictions."
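The entropy (uncertainty before a note sounds) and surprisal (prediction error once it has sounded) regressors can be illustrated with a toy first-order model of note transitions. The study's melodic-expectation model is far more elaborate, so treat this purely as a sketch:

```python
import math
from collections import Counter, defaultdict

def train_bigram(notes):
    """Count first-order transitions between successive notes."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(notes, notes[1:]):
        counts[prev][nxt] += 1
    return counts

def predictive_dist(counts, prev, alphabet):
    """p(next | prev) with add-one smoothing over the note alphabet."""
    total = sum(counts[prev].values()) + len(alphabet)
    return {n: (counts[prev][n] + 1) / total for n in alphabet}

def entropy(p):
    """Uncertainty (in bits) about the next note, before it sounds."""
    return -sum(q * math.log2(q) for q in p.values())

def surprisal(p, note):
    """Prediction error (in bits) once the note has sounded."""
    return -math.log2(p[note])

melody = [60, 62, 64, 62, 60, 62, 64, 62, 60, 67]   # MIDI pitches, mostly a repeated figure
alphabet = sorted(set(melody))
model = train_bigram(melody)

p_after_62 = predictive_dist(model, 62, alphabet)
print(entropy(p_after_62))          # 62 is followed by either 60 or 64: some uncertainty
print(surprisal(p_after_62, 64))    # a common continuation: low surprisal
p_after_60 = predictive_dist(model, 60, alphabet)
print(surprisal(p_after_60, 64))    # never observed after 60: high surprisal
```

Note-by-note values like these, computed separately for onset timing and pitch, are what get regressed against the EEG in the TRF analysis.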
via Max Planck Institute for Human Cognitive and Brain Sciences: Juan-Daniel Galeano-Otálvaro et al, Neural encoding of melodic expectations in music across EEG frequency bands, European Journal of Neuroscience (2024). DOI: 10.1111/ejn.16581
Further Reading for Meyer-like things: