This post is about how science works, and how it doesn't.
Science is hard work, and it requires lots of money, most of which comes from public funds and school tuition. But some of it comes from people-like entities called corporations. Sometimes it's hard to tell which money goes where and from whom, and we want to know, because those who fund science are ultimately creating our reality.
Infiltrating academia is the most surreptitious, subversive, insidious (and let's face it - effective) way to control the minds of a population. Every big industry (Big Agra, Big Rubber, Big Cheese, Big PFAS?) will spend more money manipulating reality by way of academic scientific pursuits than it spends making its own products and services.
This first article is from the Barabási lab and uses network science, so the way they do their study is itself interesting:
Study reveals complex dynamics of philanthropic funding for US science
Jun 2024, phys.org
The IRS in recent years has made the tax forms that nonprofits must file - disclosing their revenue, expenditures, and other organizational information - machine readable. Researchers then analyzed more than 3.6 million tax records filed by approximately 685,000 nonprofit organizations between 2010 and 2019, tracing their donations to universities and research institutions.
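What does "machine readable" buy you in practice? Below is a minimal Python sketch of the kind of extraction it enables, using only the standard library; I haven't verified the exact element names in the IRS 990 e-file schema, so it matches grant-related tags loosely rather than hard-coding any particular field.

```python
import xml.etree.ElementTree as ET

def grant_like_fields(path):
    """Yield (tag, text) pairs from a Form 990 e-file XML whose element
    names look grant-related. Element names vary across schema years,
    so this matches loosely instead of assuming specific field names."""
    tree = ET.parse(path)
    for elem in tree.iter():
        tag = elem.tag.split("}")[-1]  # strip the XML namespace prefix
        if "grant" in tag.lower() and elem.text and elem.text.strip():
            yield tag, elem.text.strip()

# Hypothetical usage, assuming a downloaded filing saved as 990_filing.xml:
# for tag, value in grant_like_fields("990_filing.xml"):
#     print(tag, value)
```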
"Some philanthropists make it very explicit that they give to their local communities. The Gates Foundation's biggest donation was to the University of Washington; they favor things in Seattle much more than they declared."
The authors also found that the amount of philanthropic dollars institutions receive is highly correlated to the degree of support provided by the National Science Foundation.
Additionally, private donors and nonprofits tend to support the same organizations over time, the analysis showed, with an 80% chance that a donor who gave to an organization two years in a row would support it the following year; for funding relationships that had lasted seven years, the probability rises to 90%.
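To make those persistence numbers concrete, here's a minimal sketch of how such probabilities can be estimated; the (donor, recipient, year) records below are invented toy data, and this is my illustration rather than the study's actual method.

```python
from collections import defaultdict

# Invented (donor, recipient, year) records standing in for extracted grants.
grants = [
    ("FoundationA", "UnivX", 2015), ("FoundationA", "UnivX", 2016),
    ("FoundationA", "UnivX", 2017), ("FoundationB", "UnivY", 2016),
    ("FoundationB", "UnivY", 2017), ("FoundationB", "UnivZ", 2018),
]

# Years in which each donor-recipient pair appears.
years = defaultdict(set)
for donor, recipient, year in grants:
    years[(donor, recipient)].add(year)

# For each streak length k, count how often a relationship that has lasted
# k consecutive years shows up again the following year. (A real analysis
# would also account for the end of the observation window.)
observed, continued = defaultdict(int), defaultdict(int)
for ys in years.values():
    for y in ys:
        k = 1                       # length of the streak ending in year y
        while y - k in ys:
            k += 1
        observed[k] += 1
        if y + 1 in ys:
            continued[k] += 1

for k in sorted(observed):
    print(f"streak of {k} year(s): P(continue) ~ {continued[k] / observed[k]:.2f}")
```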
A tool built on this kind of mapping could enhance the public's understanding of the impact of philanthropy on science, and help researchers gain access to, and awareness of, the philanthropic options that could advance their work.
via Virginia University and Albert-László Barabási at Northeastern University: Louis M. Shekhtman et al, Mapping philanthropic support of science, Scientific Reports (2024). DOI: 10.1038/s41598-024-58367-2
Next - Science depends on a written record of experiments and results. Maintaining that record, i.e., the scientific journals, is done almost exclusively by private industry. Sometimes both the scientists and the publishers have an incentive NOT to realize they're doing something wrong (Upton Sinclair: "It is difficult to get a man to understand something when his salary depends on his not understanding it").
Here's a story about how bad science happens, and what it can do to the rest of us. This is about retractions:
University of Minnesota retracts pioneering studies in stem cells, Alzheimer's disease
Jun 2024, phys.org
Dr. Karen Ashe and colleagues gained global attention in 2006 when they identified amyloid beta star 56 (Aβ*56) as a molecular target in the onset of Alzheimer's disease.
Colleagues at other institutions struggled to replicate the findings, which prompted others to look closer at the images of cellular and molecular activity in mice on which the findings were based.
The same report covers a second retraction: Catherine Verfaillie and colleagues corrected their stem cell paper in Nature in 2007, after it was found to contain an image of cellular activity in mice that appeared identical to an image in a different paper that supposedly came from different mice. The university then launched an investigation over complaints of image duplications or manipulations in more of Verfaillie's papers.
It eventually cleared her of misconduct, but faulted her for inadequate training and oversight, and claimed that a junior researcher had falsified data in a similar study published in the journal Blood.
The journal Nature stated that the paper contained "excessive manipulation, including splicing, duplication and the use of an eraser tool" to edit the images.
(This is almost 20 years later, and after lots of people invested lots of money in chasing this result.)
via The Star Tribune:
Sylvain Lesné et al, RETRACTED ARTICLE: A specific amyloid-β protein assembly in the brain impairs memory, Nature (2006). DOI: 10.1038/nature04533
Yuehua Jiang et al, RETRACTED ARTICLE: Pluripotency of mesenchymal stem cells derived from adult marrow, Nature (2002). DOI: 10.1038/nature00870
Less sensational, or perhaps more sensational, is this absolute bomb - dropped on all of us who've been obediently following the one-drink-a-day advice for about one generation since it first came out.
The worldwide public health community has been scratching its head over this ever since it first emerged from the data, many years ago: people who have one drink a day seem to live longer than people who don't drink at all. Therefore, one drink a day must be good for you! Hmmm. It turns out that, in a large population, many of the people standing in for the "normal healthy person who doesn't ever drink" are actually former heavy drinkers, abstaining not for any other reason than that alcohol was going to ruin their lives. Those people carry a built-in health burden that makes them a bad reference point for a "normal healthy person," and comparing everyone else against them makes drinkers look healthier than they really are. It's called the Former Drinker Paradox, or abstainer bias, or the sick-quitter effect, and it's likely going to be the canonical case study in public health research courses for decades to come.
This is a story about the scientific method, study design, and the need to understand how large numbers work when mashed together:
Study debunks link between moderate drinking and longer life
Jul 2024, phys.org
Reminder - "lower quality" studies, with older participants, no distinction between former drinkers and lifelong abstainers, linked moderate drinking to greater longevity. So moderate drinkers were compared with "abstainer" and "occasional drinker" groups that included some older adults who had quit or cut down on drinking because they'd developed any number of health conditions. "That makes people who continue to drink look much healthier by comparison."
"If you look at the weakest studies," Stockwell said, "that's where you see health benefits."
Yes, the weak studies.
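To see how the comparison group alone can manufacture the effect, here's a toy simulation - numbers invented purely for illustration, not the Stockwell analysis. Alcohol does nothing at all in this model, yet the "abstainer" group comes out sicker simply because it includes former drinkers who quit for health reasons.

```python
import random

random.seed(0)

# Toy model of abstainer bias (the sick-quitter problem). Drinking has zero
# effect on mortality here; only the makeup of the comparison group differs.
N = 100_000
people = []
for _ in range(N):
    moderate_drinker = random.random() < 0.5
    # Illustrative assumption: a third of "abstainers" are former drinkers
    # who quit because of poor health, carrying extra baseline risk.
    sick_quitter = (not moderate_drinker) and random.random() < 0.33
    risk = 0.05 + (0.10 if sick_quitter else 0.0)  # everyone else: 0.05
    people.append((moderate_drinker, random.random() < risk))

def mortality(is_drinker):
    group = [died for drinker, died in people if drinker == is_drinker]
    return sum(group) / len(group)

print(f"moderate drinkers: {mortality(True):.3f}")   # ~0.050
print(f"'abstainers':      {mortality(False):.3f}")  # ~0.083, all from sick quitters
```

Set the 0.33 to zero (no sick quitters in the reference group) and the two groups come out identical, which is the whole point of separating former drinkers from lifelong abstainers.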
Further reading: Stockwell, T., et al. Why do only some cohort studies find health benefits from low volume alcohol use? A systematic review and meta-analysis of study characteristics that may bias mortality risk estimates. Journal of Studies on Alcohol and Drugs (2024). DOI: 10.15288/jsad.23-00283.
Next - The scientists who make the content in the journals and the people who run the publishing industry are not the same people, yet they both seem to be having a hard time resisting the temptation to use robots:
Flood of 'junk': How AI is changing scientific publishing
Aug 2024, phys.org
A bioinformatics professor at Brigham Young University in the United States told AFP that he had been asked to peer review a study in March.
After realizing it was "100 percent plagiarism" of his own study - but with the text seemingly rephrased by an AI program - he rejected the paper.
He said he was "shocked" to find the plagiarized work had simply been published elsewhere, in a new Wiley journal called Proteomics.
More than 13,000 papers were retracted last year, by far the most in history, according to the US-based group Retraction Watch. The paper in question in this writeup, however, has not yet been retracted.
Note: usually I try to add the academic paper here at the bottom, which is usually taken from the bottom of the writeup, but in this case I think there is no paper, and the bottom of the writeup points to ... the journal that reprinted the obviously fake paper. Proteomics. Remember the name. But also remember that the science aggregator website (phys.org) probably uses some level of automation (remember when we used to call AI simply "automation"?) to place the article information at the bottom of the writeup, which would explain how that happened here. This is the future.
Paper mills: The 'cartel-like' companies behind fraudulent scientific journals
Oct 2024, phys.org via Rizqy Amelia Zein for The Conversation
In just five years, the number of retractions jumped from 10 in 2019 to 2,099 in 2023. [link]
Paper Mills - By paying around €180 to €5,000 (approximately US$197–$5,472), a person can have their name listed as the author of a research paper, without having to painstakingly do the research and write up the results.
And this is how:
- plagiarize other published articles
- contain false and stolen data
- include engineered and duplicated images
- rewrite scientific articles using generative artificial intelligence
- translate published articles from other languages into English
- sell authorship slots before an article is even accepted, with publication guaranteed
- offer fake peer review services to convince potential buyers
- bribe rogue journal editors with as much as $20,000 [link]
- leave behind unusual collaboration patterns: an article on the activity of ground beetles attacking crops in Kazakhstan, for example, is written by authors who are neither affiliated with institutions in Kazakhstan nor experts in insects or agriculture. The authors' backgrounds are suspiciously heterogeneous, ranging from anesthesia and dentistry to biomedical engineering.
Journals rarely state outright that a retraction is due to paper mill fraud, so Retraction Watch data as of May 2024 records only 7,275 paper-mill-related retractions out of roughly 44,000 total retractions. In fact, it is estimated that up to 400,000 paper mill articles have infiltrated the scientific literature over the past two decades.
via The Conversation under Creative Commons license
Even the survey participants themselves can't resist!
Survey participants are turning to AI, putting academic research results into question
Nov 2024, phys.org
"AI use has probably caused scholars and researchers and editors to pay increased scrutiny to the quality of their data."
The authors surveyed about 800 participants on Prolific (like Mechanical Turk) to learn how they engage with LLMs. All had taken surveys on Prolific at least once; 40% had taken seven surveys or more in the last 24 hours.
The authors also noted that the human-written responses included more "dehumanizing" language when describing Black Americans, Democrats, and Republicans. In contrast, LLMs consistently used more neutral, abstract language, suggesting that they may approach race, politics, and other sensitive topics with more detachment.
Participants who were newer to Prolific or identified as male, Black, Republican, or college-educated, were more likely to say they'd used AI writing assistance.
Societal inflection point: To see how human-crafted answers differ from AI-generated ones, the authors looked at data from three studies fielded on gold-standard samples before the public release of ChatGPT in November 2022.
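For a sense of what "homogenization" can mean operationally, here's a minimal sketch that scores a batch of open-ended responses by average pairwise word overlap; this is my own stand-in metric and example text, not the authors' measure or data.

```python
from itertools import combinations

def jaccard(a, b):
    """Word-set Jaccard similarity between two free-text responses."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if (wa | wb) else 0.0

def homogenization(responses):
    """Mean pairwise similarity: higher means the batch reads more alike."""
    pairs = list(combinations(responses, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Invented example batches, purely for illustration.
human = [
    "honestly i just vote for whoever seems least annoying",
    "my rent went up again so that's basically my whole answer",
    "no strong feelings, politics exhausts me",
    "depends entirely on the candidate that year",
]
ai_ish = [
    "It is important to consider multiple perspectives on this issue.",
    "It is important to consider various perspectives on this topic.",
    "One should always consider multiple perspectives on this issue.",
    "It is important to weigh multiple perspectives on this matter.",
]

print(f"human batch:  {homogenization(human):.2f}")
print(f"AI-ish batch: {homogenization(ai_ish):.2f}")  # noticeably higher
```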
via Stanford Graduate School of Business, New York University and Cornell: Simone Zhang et al, Generative AI Meets Open-Ended Survey Responses: Participant Use of AI and Homogenization, SocArXiv (2024). DOI: 10.31235/osf.io/4esdp
Bonus Reminder:
Ensuring Free, Immediate, and Equitable Access to Federally Funded Research
This memorandum provides policy guidance to federal agencies with research and development expenditures on updating their public access policies. In accordance with this memorandum, OSTP recommends that federal agencies, to the extent consistent with applicable law:
- Update their public access policies as soon as possible, and no later than December 31st, 2025, to make publications and their supporting data resulting from federally funded research publicly accessible without an embargo on their free and public release;
- Establish transparent procedures that ensure scientific and research integrity is maintained in public access policies; and,
- Coordinate with OSTP to ensure equitable delivery of federally funded research results and data.
(OSTP memorandum, August 25, 2022)