Strategic Initiatives
11,140 stories · 45 followers

Taking Sides: Wikipedia Advances Anti-Israel Narratives | RealClearInvestigations

  • Concern: A RealClearInvestigations article discusses claims of anti-Israel bias within Wikipedia regarding sources and content related to the Israeli-Palestinian conflict.

  • Criticism: The article cites criticism from multiple sources, including a bipartisan group of U.S. Congress members, regarding alleged bias and the favoring of certain perspectives.

  • Examples: Specific examples of biased entries are provided, such as the 'Gaza genocide' entry and the use of sources like Amnesty International and Rashid Khalidi.

  • Sources: The article highlights the problematic use of specific sources, including academics and NGOs, and the exclusion of others, pointing out the potential for skewed information.

  • Wikipedia's Response: The article notes Wikipedia's claims of neutrality and its reliance on a consensus model, while also pointing out criticisms about the model's effectiveness in maintaining unbiased content.


Wikipedia, the world’s go-to site for information that professes to take a neutral point of view, is coming under fire for alleged anti-Israel bias in the sources it favors and content it delivers to millions of readers. 
The criticism is coming from several quarters, including a bipartisan group of 23 members of Congress who, in an April letter, expressed “deep concern regarding antisemitism” found in the online encyclopedia. The entries routinely highlight the work of anti-Zionist scholars and Non-Governmental Organizations (NGOs), according to a review by RealClearInvestigations, while dismissing the views of Israel’s defenders. Amnesty International, which casts Israel as genocidal, is considered a reliable source for the Israeli-Palestinian conflict, while the Anti-Defamation League, which rejects that view, is not. 
A vigil for victims of antisemitic violence, Yaron Lischinsky and Sarah Milgrim, gunned down last month in Washington, D.C. (AP)
The controversy has emerged during a sharp rise in antisemitism around the world, including the recent murders of two Israeli embassy staffers in Washington, D.C., and the firebombing in Boulder of protesters demanding the release of hostages taken by Hamas. Critics argue that the online encyclopedia is fueling this hatred by publishing biased entries that are presented as objective statements of fact. 
Wikipedia is produced by volunteer editors who are instructed to follow a set of rules as they summarize the work of authoritative sources, which can include those that appear to be biased. Its consensus model encourages editors to work out their differences collegially and reach a compromise that balances the different viewpoints of sources to ensure neutrality. But critics say that so many academics and NGOs hold left-leaning views that cast Israel as the oppressor and Palestinians as the oppressed that it is hard for editors to avoid publishing biased statements as neutral ones. 
Consider Wiki’s entry for “Gaza genocide” – a title that, critics argue, takes sides. It begins with this statement: “According to a United Nations Special Committee, Amnesty International, and other experts and human rights organizations, Israel is committing genocide against the Palestinian people during its ongoing invasion and bombing of the Gaza Strip as part of the Gaza war.” The entry then lists several paragraphs of evidence, including large-scale deaths of Palestinians, the forced displacement of most of the population, and starvation. 
Where’s the other side of the story to establish neutrality? Not until the seventh paragraph do readers learn that Hamas’ attack in Israel, killing 1,139 people, sparked the invasion of Gaza. But rather than calling Hamas – whose avowed goal is the destruction of Israel – a terrorist group, a classification used by the U.S., EU, U.K., Canada, and other democratic nations, the entry describes the Oct. 7 attack by Hamas as a response to Israel’s historic treatment of Palestinians. 
Leading With Bias 
Critics say that, as in the case of the “Gaza genocide,” bias is often revealed in the opening lines of entries. This can skew readers’ understanding because, as Wikipedia reports, about 60% of people don’t scroll past the lead. In the Hamas entry, readers would have missed the terrorist designation of the organization that controls Gaza. It appears in the very last line of the opening section. 
Wikipedia claims to use a "consensus model" that balances different viewpoints to ensure neutrality. (Wikipedia)
“Fundamentally the policy for reliability is based on the views of editors, and not more rigorous metrics,” explained a Wikipedia editor who says they have edited hundreds of articles. “There is also a conflict between reliability and the ideas of [the] ideological fringe.” “Pro-Hamas editors,” added this editor, who insisted on anonymity, selectively choose “their preferred academic sources.”
Wikipedia’s impact is amplified because its entries are usually high on the list of links offered by top search engines Google and Bing, which often reprint the first few lines in their query responses. A Wikipedia article is dedicated to the longstanding relationship between Google and Wikipedia, discussing how Google utilizes Wikipedia to combat misinformation on YouTube and how Google has made donations to the Wikimedia Foundation, the San Francisco-based nonprofit that oversees Wikipedia. 
The Wikimedia Foundation, which manages the site, did not respond to RCI’s repeated requests for comment. In March, a spokesperson said, “The Foundation takes seriously allegations of bias on Wikipedia. We categorically condemn antisemitism and all forms of hate.”
Katherine Maher, who led the Foundation from 2016-2021 and is now CEO of National Public Radio, recently walked back controversial statements, including that America was “addicted to white supremacy” and that “our reverence for the truth might become, might have become, a bit of a distraction.”
Questionable Sources
The plethora of anti-Israel academics makes it easy to present anti-Israel narratives under the guise of neutrality. The entry for the Nakba – Arabic for catastrophe – describes the 1948 war after the UN created the state of Israel as “the ethnic cleansing of Palestinians in Mandatory Palestine during the 1948 Palestine war.” This echoes the claim in the Zionism entry that “Zionists wanted to create a Jewish state in Palestine with as much land, as many Jews, and as few Palestinian Arabs as possible.” Neither article presented balancing perspectives in the all-important top parts of the lead sections. 
Two of the sources for these claims are Columbia University Professor Emeritus Rashid Khalidi and University of Exeter Professor Ilan Pappé. Both professors are seen by critics as anti-Israel activists. 
Despite his reported ties to the PLO, Columbia University Professor Rashid Khalidi's views are considered mainstream by Wikipedia. (Wikipedia)
Several newspapers have reported that Khalidi was associated with the Palestine Liberation Organization (PLO) when he was in Beirut in 1976-83; at the time, the PLO was widely considered to be a terrorist organization. In 1976, the Los Angeles Times described Khalidi as “a PLO spokesman.” In 1978, the New York Times said he “works for the PLO” (the article misspells his name as “Khalidy”), and in 1979 the paper reported that he was “close to Al Fatah” (which controls the PLO). A 1981 Christian Science Monitor article refers to Khalidi as having “good access to PLO leadership.” None of the papers has corrected these statements. 
Wikipedia’s entry on Khalidi includes his reported links to the PLO and his denial of being a spokesman for the group, saying in 2004 that he “often spoke to journalists in Beirut, who usually cited me without attribution as a well-informed Palestinian source. If some misidentified me at the time, I am not aware of it.” He also said that he didn’t have much time for anything outside of his academic work, writing, and research while he was in Beirut in that timeframe.
Asaf Romirowsky, a historian who heads Scholars for Peace in the Middle East and the Association for the Study of the Middle East and North Africa, has tracked Khalidi and Pappé’s work. A defender of Israel, he reports that Khalidi has at various times described Israel as a “racist state” and an “apartheid system in creation.” Romirowsky told RCI that Khalidi supported Columbia’s anti-Israel encampment in the spring of 2024 and has become increasingly anti-American and anti-Western in his books.
Whatever the truth, critics say enough questions surround Khalidi’s past that it is hard to view him as a neutral source. Khalidi did not respond to RCI’s requests for comment.
Complicated History
Pappé belongs to a group of Israeli historians who challenged Israel’s version of the 1948 war. In June 2024, Pappé said that the “hope for me is the end of Israel and a creation of a free Palestine from the river to the sea.”
According to Romirowsky, Pappé’s 2006 book “The Ethnic Cleansing of Palestine” – which Wikipedia cites in both the “as few Palestinian Arabs as possible” and “ethnic cleansing” lines – is premised on the belief that Israel had a “master plan” in 1948 to eradicate the Arab Palestinian community and that its policies are still motivated by an ethnic cleansing agenda. Romirowsky disputes that narrative, calling such claims “sensationalist.”  
“Don’t get me wrong: wars are difficult,” Romirowsky said. “The history is not black and white, there were a lot of mistakes made in ’48, but … [historians like Pappé] have selectively chosen quotes without giving context as to the position of the Jewish Agency – the Yishuv – at the time as it comes to the Arab Palestinian population.” 
Wikipedia considers the views of Ilan Pappé, a historian who said he hopes for "an end to Israel," mainstream. (Wikipedia)
Pappé did not respond to RCI’s requests for comment.
A second Wikipedia editor, self-described as being disillusioned after making thousands of edits, told RCI anonymously that “it’s at best naive to use sources everyone knows are biased” to make statements “in the encyclopedia's neutral voice,” and such sources should always be properly attributed. 
The first editor told RCI that while sources like Khalidi and Pappé can be used, “it should be noted that not everyone agrees with them,” and such sources should be balanced with academics who hold opposing views.
Comparing Hamas and Likud
Wikipedia’s page on Hamas similarly draws upon what critics see as partisan sources. The article compares the Hamas charter to the platform of Likud, a right-wing Israeli political party, stating: “Many scholars have pointed out that both the 1988 Hamas charter and the Likud party platform sought full control of the land, thus denouncing the two-state solution.” These “many scholars,” according to the authorities cited, include the journalist Peter Beinart and politically active linguist Noam Chomsky, both of whom are well-known left-wing critics of Israel. 
It does not note that other scholars reject this equivalence. Romirowsky said that “there is nothing in the Likud party platform that talks about the eradication of the Arab Palestinian population,” while the Hamas charter calls for the eradication of Jews and Zionists, and the charter uses “Jews” and “Zionists” interchangeably.
The first editor told RCI anonymously that the wording of the Hamas-Likud comparison violates the site’s policy. “Unless you have an academic review article that says that academic consensus is x, y, and z, you can’t write, ‘many scholars think x, y, and z’ and then cite it to your own cherry-picked list of scholars,” the editor said. 
Double Standards
Pro-Israel academics are cited on Wikipedia, but they often face hurdles. The 2018 book “The Zionist Ideas” by McGill University professor Gil Troy, who identifies himself as “American Historian, Zionist Thinker” on his personal website, was rejected for inclusion on an editor-compiled list of “Best Sources” providing an overview of Zionism on Wikipedia. The stated reasoning: Troy is an American presidential historian whose book is better suited to discussing different types of Zionism. 
Wikipedia deems Shahid Alam, a professor who advises Northeastern University's chapter of Students for Justice in Palestine, a "best source." (Northeastern University)
By contrast, Shahid Alam, an emeritus professor of economics at Northeastern University who serves as faculty advisor for the school’s chapter of Students for Justice in Palestine, was included in the “Best Sources” list and is cited twice in the Zionism page, including in the opening section. Alam wrote a controversial column for CounterPunch in 2004, drawing parallels between the American revolutionaries and the 9/11 hijackers in that “both insurgencies seek to overthrow what they perceive to be foreign occupations,” though he acknowledged that the colonists did not target civilians and the 9/11 hijackers did. 
Wikipedia editors also treat NGOs whose reports sometimes appear biased as neutral observers. The lead of the “Use of human shields by Hamas” page states that Amnesty International found no evidence to support Israel’s claims that Hamas used human shields in the 2008-2009 and 2014 Gaza Wars, while Human Rights Watch found no evidence that Hamas used human shields in the 2008-2009 war.
Max Abrahms, a political science professor at Northeastern University and terrorism expert, found the groups’ claims that Hamas did not use human shields during those prior wars “laughable.” He said that there is photo evidence to the contrary, and even the Palestinian Authority has said that Hamas uses human shields. 
While Amnesty International and Human Rights Watch are considered reliable sources, some organizations more sympathetic to Israel are not. The Heritage Foundation was recently blacklisted on Wikipedia (meaning its URL is blocked from the online encyclopedia) following a report from The Forward that Heritage had proposed unmasking antisemitic Wikipedia editors. 
The Anti-Defamation League (ADL) has also been downgraded to being generally unreliable on all topics related to the Israeli-Palestinian conflict, Israel, or Zionism (although the ADL’s hate symbol database is considered reliable). The downgrade occurred following a formal discussion among editors last year in which they stated their position on the reliability of ADL; three uninvolved Wikipedians in good standing then “closed” the discussion and rendered a verdict based on the numbers and strength of the arguments. In this discussion, editors cited critics of Israel like The Nation, The Guardian, and Jewish Currents as evidence that the ADL conflates criticism of the Israeli government with antisemitism and thus its reliability should be downgraded. 
One source that is generally considered reliable on Wikipedia is the Southern Poverty Law Center (SPLC), which has been criticized for casting mainstream conservatives as extremists. On May 24, a formal discussion was launched reexamining the SPLC’s reliability on Wikipedia, which remains ongoing as of publication time.
Although Wikipedia credits its volunteer editing for making it a vast encyclopedia with tens of millions of pages, it lacks the policies and tools to achieve neutrality on hotly contested topics like the Israel-Palestine conflict, according to a third editor, a longtime contributor who requested anonymity. “The system is remedial at best: if it is from a plausibly reliable academic source, publication, institution, or individual, it is usually allowed without question, and is incredibly hard to contest,” the editor said. “Just because a person is staffed at a university or is publishing work in a peer-reviewed journal does not mean that it should be considered by default a reliable or neutral point-of-view source.”

How Did Humans Evolve Such Rotten Genetics? | RealClearScience

  • Humans' genetics: Exhibit a high rate of chromosomal abnormalities and harmful mutations.

  • The author's book: Suggests that small population sizes during the evolution of human features, together with feeding young across a placenta, contribute to this state.

  • Reproductive risks: Human reproduction is inherently risky, with a significant rate of miscarriages and chromosomal issues in embryos.

  • Placenta & Gestational Issues: Problems like gestational diabetes and pre-eclampsia are linked to the placenta and are human-specific.

  • Evolution & Mutations: The nearly-neutral theory is used to explain these issues as a consequence of human population size during evolution.


To Shakespeare’s Hamlet we humans are “the paragon of animals”. But recent advances in genetics are suggesting that humans are far from being evolution’s greatest achievement.
For example, humans have an exceptionally high proportion of fertilised eggs that have the wrong number of chromosomes and one of the highest rates of harmful genetic mutation.
In my new book The Evolution of Imperfection I suggest that two features of our biology explain why our genetics are in such a poor state. First, we evolved a lot of our human features when our populations were small and second, we feed our young across a placenta.


Our reproduction is notoriously risky for both mother and embryo. For every child born, another two fertilised eggs never made it.
Most human early embryos have chromosomal problems. For older mothers, these embryos tend to have too many or too few chromosomes due to problems in the process of making eggs with just one copy of each chromosome. Most chromosomally abnormal embryos don’t make it to week six so are never a recognised pregnancy.
About 15% of recognised pregnancies spontaneously miscarry, usually before week 12, rising to 65% in women over 40. About half of miscarriages are because of chromosomal issues.
Other mammals have similar chromosome-number problems but with an error rate of about 1% per chromosome. Cows should have 30 chromosomes in sperm or egg but about 30% of their fertilised eggs have odd chromosome numbers.
Humans with 23 chromosomes should have about 23% of fertilised eggs with the wrong number of chromosomes, but our rate is higher, in part because we now tend to reproduce late and chromosomal errors escalate with maternal age.
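A quick back-of-the-envelope check (our arithmetic, assuming each chromosome mis-segregates independently at 1%) shows why those two estimates line up:

```python
# Our own sanity check, not from the article: with an independent ~1%
# error rate per chromosome, the chance that a 23-chromosome gamete
# carries at least one segregation error is
p_any_error = 1 - 0.99 ** 23
print(f"{p_any_error:.1%}")  # ~20.6%, close to the simple 23 x 1% = 23% estimate
```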
Survive that, and gestational diabetes and high blood pressure issues await, most notably pre-eclampsia, which is potentially lethal to mother and child and affects about 5% of pregnancies. It is unique to humans. 
Historically, up until about 1800, childbirth was remarkably dangerous with about 1% maternal mortality risk, largely owing to pre-eclampsia, bleeding and infection. In Japanese macaques by contrast, despite offspring also having a large head, maternal mortality isn’t seen. Advances in maternal care have seen current UK maternal mortality rates plummet to 0.01%.
Many of these problems are contingent on the placenta. Compare us to a kiwi bird that loads its large egg with resources and sits on it, even if it is dead: time and energy wasted. In mammals, if the embryo is not viable, the mother may not even know she had conceived.
The high rate of chromosomal issues in our early embryos is a mammalian trait connected to the fact that early termination of a pregnancy lessens the costs: less time is wasted holding onto a dead embryo, and the resources a viable embryo needs to grow into a baby are not given up.
But reduced costs are not enough to explain why chromosomal problems are so common in mammals.
During the process of making a fertilisable egg with one copy of each chromosome, a sister cell is produced, called the polar body. It’s there to discard half of the chromosomes. It can “pay” in evolutionary terms for a chromosome to not go to the polar body when it should instead stay behind in the soon to be fertilised egg.
The resulting loss of the embryo forces a redirection of resources to viable offspring. This can explain why chromosomal errors are mostly maternal and why other vertebrates, which lack the ability to redirect saved energy, don’t seem to have embryonic chromosome problems.
Our problems with gestational diabetes are a consequence of foetuses releasing chemicals from the placenta into the mother’s blood to keep glucose available. The problems with pre-eclampsia are associated with malfunctioning placentas, in part owing to maternal immune rejection of the foetus.
Regular unprotected sex can protect women against pre-eclampsia by helping the mother become used to paternal proteins. The fact that pre-eclampsia is human-specific may be related to our exceptionally invasive placenta that burrows deep into the uterine lining, possibly required to build our unusually large brains.
Our other peculiarities are predicted by the most influential evolutionary theory of the last 50 years, the nearly-neutral theory. It states that natural selection is less efficient when a species has few individuals.
A slightly harmful mutation can be removed from a population if that population is large but can increase in frequency, by chance, if the population is small. Most human-specific features evolved when our population size was around 10,000 in Africa prior to its recent (last 20,000 years) expansion. Minuscule compared to, for example, bacterial populations.
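The nearly-neutral claim is easy to demonstrate in a toy Wright-Fisher simulation; the sketch below is our illustration with arbitrary parameter values, not code from the book:

```python
import numpy as np

def fixation_rate(n_individuals, s, trials=5000, seed=0):
    """Fraction of runs in which one new mutant allele with fitness 1 - s
    takes over a diploid Wright-Fisher population."""
    rng = np.random.default_rng(seed)
    n_alleles = 2 * n_individuals
    fixed = 0
    for _ in range(trials):
        freq = 1.0 / n_alleles                 # a single new mutant copy
        while 0.0 < freq < 1.0:
            # Selection nudges the expected frequency down...
            p = freq * (1 - s) / (freq * (1 - s) + (1 - freq))
            # ...but sampling the next generation (drift) adds noise.
            freq = rng.binomial(n_alleles, p) / n_alleles
        fixed += freq == 1.0
    return fixed / trials

# A slightly harmful mutation occasionally fixes in a small population,
# essentially never in a large one:
print(fixation_rate(100, s=0.001))     # on the order of 0.4%
print(fixation_rate(10_000, s=0.001))  # effectively zero
```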
This explains why we have such a bloated genome. The main job of DNA is to give instructions to our cells about how to make the proteins vital for life.
That is done by just 1% of our DNA, but by 85% of the DNA of our gut-dwelling bacterium Escherichia coli. Some of our DNA is required for other reasons, such as controlling which genes get activated and when. Yet only about 10% of our DNA shows any signs of being useful.
If you have a small population size, you also have more problems stopping genetic errors like mutations. Although DNA mutations can be beneficial, they are more commonly a curse. They are the basis of genetic diseases, be they complex (such as Crohn’s disease and predispositions to cancer) or owing to single-gene effects (like cystic fibrosis and Huntington’s disease).
We have one of the highest mutation rates of all species. Other species with massive populations have mutation rates over three orders of magnitude lower, another prediction of the nearly-neutral theory.
A consequence of our high mutation rate is that around 5% of us suffer a “rare” genetic disease.
Modern medicine may help cure our many ailments, but if we can’t do anything about our mutation rate, we will still get ill.
Laurence D. Hurst, Professor of Evolutionary Genetics at The Milner Centre for Evolution, University of Bath
This article is republished from The Conversation under a Creative Commons license. Read the original article.

Texas Woman Dies From Brain-Eating Amoeba After Flushing Sinuses : ScienceAlert

  • The reported case: A Texas woman died from a rare brain infection after using non-sterile water from an RV to flush her sinuses.

  • The cause: The infection was caused by Naegleria fowleri, a brain-eating amoeba.

  • Transmission route: The amoeba entered her brain through her nose during nasal irrigation.

  • Water quality: RV water was suspected to have insufficient disinfectant levels, which could have allowed the amoeba to thrive.

  • Recommended action: The CDC recommends using distilled or sterilized water for nasal irrigation.


A woman in Texas has died from a rare brain infection after flushing her nose with water stored in the tank of a recreational vehicle.

Lab tests on the 71-year-old woman's cerebrospinal fluid confirmed she was infected with Naegleria fowleri, a tiny, free-swimming protozoan also dubbed 'the brain-eating amoeba,' which causes the highly lethal disease primary amoebic meningoencephalitis (PAM).

This killer bug hangs out in warm bodies of fresh water like ponds, lakes, and even neglected swimming pools. Most infections occur while swimming or engaging in water sports in these places.

"The patient had no recreational exposure to fresh water; however, she had reportedly performed nasal irrigation on several occasions using non-boiled water from the RV potable water faucet during the four days before illness onset," a CDC case report details.

"Despite medical treatment for a suspected PAM infection, the patient developed seizures and subsequently died eight days after symptom onset."

Authorities were unable to detect the amoeba in samples from the RV tank or the campground water supply, which may be because they took samples 23 days after the possible exposure took place.

But testing did indicate the water had inadequate levels of disinfectant to prevent microbes from building biofilm communities that can protect pathogens like N. fowleri. The water was also cloudier than recommended for drinking water, another sign of insufficient disinfection.

This is why you really shouldn't flush tap (or RV tank) water up your nose for nasal irrigation: the CDC recommends only distilled or sterilized water be used.

Because while N. fowleri likes things lukewarm, it really loves hotter liquids: the kinds sloshing around inside our bodies, for instance.

When it enters a human body, N. fowleri passes a temperature threshold – about 25 degrees Celsius or 77 degrees Fahrenheit – that transforms the bug from its flagellate state into an insatiable trophozoite, a form in which the microbe is actively feeding and reproducing.

Any other entryway to the body, and N. fowleri will be politely escorted out by our immune system or burned up by stomach acids. But via the nose, it has an alarmingly straightforward route to the brain.

This ravenous pathogen chews through the membrane we use to smell with, the olfactory epithelium, and follows the nerve fibers that carry scent signals back to our brains, nibbling at astrocytes and neurons along the way until it reaches its cerebral target.

Most people infected with N. fowleri die within 1 to 18 days after symptoms start. Warning signs include headache, fever, nausea, and vomiting, which can progress to a stiff neck, confusion, loss of balance, and hallucinations.

While this disease is very rare, the prognosis is dire: of the 164 cases reported in the US between 1962 and 2023, only 4 people have survived.

"This case reinforces the potential for serious health risks associated with improper use of nasal irrigation devices, as well as the importance of maintaining RV water quality and ensuring that municipal water systems adhere to regulatory standards," the report states.

The full CDC report is available here.


"AI scientist" discovers common drugs that can kill cancer cells - Earth.com

  • AI's Role: The research explores AI's potential in generating hypotheses and discovering drug interactions for cancer treatment.

  • Methodology: GPT-4, a large language model, was used to suggest drug combinations for breast cancer, prioritizing affordable, approved drugs.

  • Findings: The AI proposed combinations of non-cancer drugs that proved effective in lab tests, with some showing better results than standard therapies.

  • Synergy: The study found synergistic effects in several drug combinations, meaning they worked better together than individually.

  • Future Implications: The research suggests that AI could reduce the cost of personalized medicine, leading to tailored cancer treatments.


'AI scientist' discovers that common non-cancer drugs, when combined, can kill cancer cells
06-05-2025
Sanjana Gajbhiye, Earth.com staff writer
While AI has already transformed fields like image recognition and translation, researchers are now exploring its potential in discovery-driven tasks, exploring how different drugs interact with cancer cells.
One of the most exciting applications is hypothesis generation, something that was once thought to be the domain of human curiosity alone.
A recent study, led by researchers from the University of Cambridge in partnership with King’s College London and Arctoris Ltd, tested this idea.
Could AI suggest treatments for breast cancer using drugs not originally meant to fight cancer? Could those suggestions lead to real, testable breakthroughs in the lab? The results suggest the answer might be yes.

AI finds new drug ideas for cancer

The research focused on GPT-4, a large language model (LLM) that is trained on vast amounts of internet text. The team designed prompts that asked GPT-4 to generate pairs of drugs that could work against MCF7 breast cancer cells but not harm healthy cells (MCF10A).
They also restricted the model from using known cancer drugs and instructed it to prioritize options that were affordable and already approved for use in humans.
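The study’s exact prompts aren’t reproduced here, but a minimal sketch of how such a constrained query could be sent through the OpenAI Python client might look like this (the prompt wording is our paraphrase of the constraints described above):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Paraphrased constraints: kill MCF7, spare MCF10A, no known cancer
# drugs, prefer cheap and already-approved compounds.
prompt = (
    "Propose pairs of drugs likely to kill MCF7 breast cancer cells "
    "while sparing non-tumorigenic MCF10A cells. Exclude established "
    "cancer drugs. Prefer inexpensive drugs already approved for human "
    "use. For each pair, give a brief mechanistic rationale."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```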
“This is not automation replacing scientists, but a new kind of collaboration,” said Dr. Hector Zenil from King’s College London.
“Guided by expert prompts and experimental feedback, the AI functioned like a tireless research partner, rapidly navigating an immense hypothesis space and proposing ideas that would take humans alone far longer to reach.”
In its first round, GPT-4 proposed 12 unique drug combinations. Interestingly, all combinations included drugs not traditionally associated with cancer therapy.
These included medications for conditions like high cholesterol, parasitic infections, and alcohol dependence.
Even so, these combinations were not arbitrary. GPT-4 provided rationales for each pairing, often tying together biological pathways in unexpected ways.

Some combos work well

The next step was to test the drug pairs in the lab. Scientists measured two things: how well each combination attacked MCF7 cells and how much damage was caused to MCF10A cells.
They also evaluated whether the drug pairs worked better together than separately, a property known as synergy.
Three combinations stood out for having better results than standard cancer therapies. One involved simvastatin and disulfiram.
Another paired dipyridamole with mebendazole. A third involved itraconazole and atenolol. These drug pairs were not only effective against MCF7 cells, but they worked without overly harming healthy cells.
“Supervised LLMs offer a scalable, imaginative layer of scientific exploration, and can help us as human scientists explore new paths that we hadn’t thought of before,” said Professor Ross King from Cambridge’s Department of Chemical Engineering and Biotechnology, who led the research.

AI and humans improve ideas together

Following the first set of results, the researchers asked GPT-4 to analyze what had worked and suggest new ideas.
They shared summaries of the lab findings and prompted the AI to propose four more drug combinations, including some involving known cancer drugs like fulvestrant.
This time, the AI returned combinations such as disulfiram with quinacrine and mebendazole with quinacrine. Three of the four new suggestions again showed promising synergy scores.
One of the most effective combinations was disulfiram with simvastatin, which achieved the highest synergy score of the entire study at over 10 on the HSA scale.
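The HSA (highest single agent) score mentioned here has a simple definition: a combination is synergistic when its effect exceeds the better of its two components. A toy version with invented numbers:

```python
def hsa_excess(effect_a, effect_b, effect_combo):
    """HSA synergy score: the combination's effect minus the best
    single-agent effect. Effects are % inhibition of cancer-cell
    viability; positive values indicate synergy."""
    return effect_combo - max(effect_a, effect_b)

# Hypothetical readings, not the study's data:
print(hsa_excess(40.0, 35.0, 52.5))  # -> 12.5, i.e. a synergy score over 10
```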
This feedback loop, in which the AI suggests ideas, humans test them, and the results feed back to the AI, represents a novel way of conducting science.
The process no longer moves in one direction. Instead, it cycles, with both machine and human adjusting and improving as they learn from each iteration.

AI found surprising cancer combos

Among the twelve original combinations, six showed positive synergy scores for MCF7 cancer cells. These included unusual pairings like furosemide and mebendazole or disulfiram and hydroxychloroquine.
Importantly, eight of these twelve combinations had greater effects on MCF7 cells than on MCF10A cells, indicating good specificity.
Some of the most toxic drugs to MCF7 cells included disulfiram, quinacrine, niclosamide, and dipyridamole. Disulfiram had the lowest IC50 value, meaning it required only a small dose to reduce cell viability.
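For readers unfamiliar with the term, the IC50 is the dose at which measured viability falls to 50% of the untreated control. A toy interpolation with invented numbers:

```python
import numpy as np

# Hypothetical dose-response readings, not the study's data.
doses = np.array([0.01, 0.1, 1.0, 10.0])        # dose, micromolar
viability = np.array([95.0, 80.0, 40.0, 10.0])  # % of untreated control

# np.interp needs increasing x-values, so flip both arrays to
# interpolate dose as a function of falling viability.
ic50 = np.interp(50.0, viability[::-1], doses[::-1])
print(f"IC50 ~ {ic50:.2f} uM")  # lower IC50 means a more potent drug
```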
GPT-4’s ability to find such effective non-cancer drugs and pair them meaningfully surprised even the researchers.
“This study demonstrates how AI can be woven directly into the iterative loop of scientific discovery, enabling adaptive, data-informed hypothesis generation and validation in real time,” said Zenil.

Hallucinations as creative leaps

GPT-4 sometimes makes mistakes, known as hallucinations: statements not supported by its training data. In most cases, hallucinations are flaws. But in hypothesis generation, they can be productive.
In this study, one such hallucination involved the claim that itraconazole affects cell membrane integrity in human cells.
While this is true for fungi, human cells do not use the same biological pathway. Yet, this flawed idea still led to successful experiments.
“The capacity of supervised LLMs to propose hypotheses across disciplines, incorporate prior results, and collaborate across iterations marks a new frontier in scientific research,” said King.

AI may help tailor cancer treatments

The research team believes that AI and lab automation could eventually reduce the cost of personalized medicine.
Cancer treatment might one day involve a custom research project for each patient. Instead of general prescriptions, therapies could be tested and tailored in near real time.
The cost of running a lab remains high, but AI like GPT-4 drastically cuts down the time and effort required to generate useful hypotheses. With further advances in robotics, the physical testing process might also become cheaper.
“Our empirical results demonstrate that the GPT-4 succeeded in its primary task of forming novel and useful hypotheses,” the authors concluded.

What this means for future science

This study shows that AI can do more than summarize or analyze. It can take part in generating new scientific knowledge.
GPT-4 didn’t just crunch numbers. It offered unexpected ideas, learned from outcomes, and improved its suggestions.
The combinations it proposed still need clinical trials. They are far from becoming approved treatments. But their success in lab settings highlights the potential of repurposing safe, existing drugs for new uses, something that could save years of development time.
The study is published in the Journal of The Royal Society Interface and arXiv.

DOGE Developed Error-Prone AI to Help Kill Veterans Affairs Contracts — ProPublica

  • ProPublica report: A software engineer with no healthcare experience built an AI tool to identify non-essential contracts at the Department of Veterans Affairs.

  • AI tool's flaws: The tool, using outdated AI models, made numerous errors, misreading contract values and misidentifying essential services.

  • Contract cancellations: The AI flagged numerous contracts for cancellation, some of which were subsequently terminated, including those related to cancer treatment and research.

  • AI limitations: Experts criticized the use of AI for budgetary cuts, highlighting the tool's flawed instructions and inability to understand how the VA operates.

  • Engineer's response: The engineer acknowledged errors in the code and emphasized that human review preceded any contract terminations.


As the Trump administration prepared to cancel contracts at the Department of Veterans Affairs this year, officials turned to a software engineer with no health care or government experience to guide them.

The engineer, working for the Department of Government Efficiency, quickly built an artificial intelligence tool to identify which services from private companies were not essential. He labeled those contracts “MUNCHABLE.”

The code, using outdated and inexpensive AI models, produced results with glaring mistakes. For instance, it hallucinated the size of contracts, frequently misreading them and inflating their value. It concluded more than a thousand were each worth $34 million, when in fact some were for as little as $35,000.

The DOGE AI tool flagged more than 2,000 contracts for “munching.” It’s unclear how many have been or are on track to be canceled — the Trump administration’s decisions on VA contracts have largely been a black box. The VA uses contractors for many reasons, including to support hospitals, research and other services aimed at caring for ailing veterans.

VA officials have said they’ve killed nearly 600 contracts overall. Congressional Democrats have been pressing VA leaders for specific details of what’s been canceled without success.

We identified at least two dozen on the DOGE list that have been canceled so far. Among the canceled contracts was one to maintain a gene sequencing device used to develop better cancer treatments. Another was for blood sample analysis in support of a VA research project. Another was to provide additional tools to measure and improve the care nurses provide.

ProPublica obtained the code and the contracts it flagged from a source and shared them with a half dozen AI and procurement experts. All said the script was flawed. Many criticized the concept of using AI to guide budgetary cuts at the VA, with one calling it “deeply problematic.”

Cary Coglianese, professor of law and of political science at the University of Pennsylvania who studies the governmental use and regulation of artificial intelligence, said he was troubled by the use of these general-purpose large language models, or LLMs. “I don’t think off-the-shelf LLMs have a great deal of reliability for something as complex and involved as this,” he said.

Sahil Lavingia, the programmer enlisted by DOGE, which was then run by Elon Musk, acknowledged flaws in the code.

“I think that mistakes were made,” said Lavingia, who worked at DOGE for nearly two months. “I’m sure mistakes were made. Mistakes are always made. I would never recommend someone run my code and do what it says. It’s like that ‘Office’ episode where Steve Carell drives into the lake because Google Maps says drive into the lake. Do not drive into the lake.”

Though Lavingia has talked about his time at DOGE previously, this is the first time his work has been examined in detail and the first time he’s publicly explained his process, down to specific lines of code.

Lavingia has nearly 15 years of experience as a software engineer and entrepreneur but no formal training in AI. He briefly worked at Pinterest before starting Gumroad, a small e-commerce company that nearly collapsed in 2015. “I laid off 75% of my company — including many of my best friends. It really sucked,” he said. Lavingia kept the company afloat by “replacing every manual process with an automated one,” according to a post on his personal blog.

Lavingia had little time to immerse himself in how the VA handles veterans’ care: he started on March 17 and wrote the tool the following day. Yet his experience with his own company aligned with the direction of the Trump administration, which has embraced the use of AI across government to streamline operations and save money.

Lavingia said the quick timeline of Trump’s February executive order, which gave agencies 30 days to complete a review of contracts and grants, was too short to do the job manually. “That’s not possible — you have 90,000 contracts,” he said. “Unless you write some code. But even then it’s not really possible.”

Under a time crunch, Lavingia said he finished the first version of his contract-munching tool on his second day on the job — using AI to help write the code for him. He told ProPublica he then spent his first week downloading VA contracts to his laptop and analyzing them.

VA press secretary Pete Kasperowicz lauded DOGE’s work on vetting contracts in a statement to ProPublica. “As far as we know, this sort of review has never been done before, but we are happy to set this commonsense precedent,” he said.

The VA is reviewing all of its 76,000 contracts to ensure each of them benefits veterans and is a good use of taxpayer money, he said. Decisions to cancel or reduce the size of contracts are made after multiple reviews by VA employees, including agency contracting experts and senior staff, he wrote.

Kasperowicz said that the VA will not cancel contracts for work that provides services to veterans or that the agency cannot do itself without a contingency plan in place. He added that contracts that are “wasteful, duplicative or involve services VA has the ability to perform itself” will typically be terminated.

Trump officials have said they are working toward a “goal” of cutting around 80,000 people from the VA’s workforce of nearly 500,000. Most employees work in one of the VA’s 170 hospitals and nearly 1,200 clinics.

The VA has said it would avoid cutting contracts that directly impact care out of fear that it would cause harm to veterans. ProPublica recently reported that relatively small cuts at the agency have already been jeopardizing veterans’ care.

The VA has not explained how it plans to simultaneously move services in-house, as Lavingia’s code suggested was the plan, while also slashing staff.

Many inside the VA told ProPublica the process for reviewing contracts was so opaque they couldn’t even see who made the ultimate decisions to kill specific contracts. Once the “munching” script had selected a list of contracts, Lavingia said he would pass it off to others who would decide what to cancel and what to keep. No contracts, he said, were terminated “without human review.”

“I just delivered the [list of contracts] to the VA employees,” he said. “I basically put munchable at the top and then the others below.”

VA staffers told ProPublica that when DOGE identified contracts to be canceled early this year — before Lavingia was brought on — employees sometimes were given little time to justify retaining the service. One recalled being given just a few hours. The staffers asked not to be named because they feared losing their jobs for talking to reporters.

According to one internal email that predated Lavingia’s AI analysis, staff members had to respond in 255 characters or fewer — just shy of the 280 character limit on Musk’s X social media platform.

Once he started on DOGE’s contract analysis, Lavingia said he was confronted with technological limitations. At least some of the errors produced by his code can be traced to using older versions of OpenAI models available through the VA — models not capable of solving complex tasks, according to the experts consulted by ProPublica.

Moreover, the tool’s underlying instructions were deeply flawed. Records show Lavingia programmed the AI system to make intricate judgments based on the first few pages of each contract — about the first 2,500 words — which contain only sparse summary information.

“AI is absolutely the wrong tool for this,” said Waldo Jaquith, a former Obama appointee who oversaw IT contracting at the Treasury Department. “AI gives convincing looking answers that are frequently wrong. There needs to be humans whose job it is to do this work.”

Lavingia’s prompts did not include context about how the VA operates, what contracts are essential or which ones are required by federal law. This led AI to determine a core piece of the agency’s own contract procurement system was “munchable.”

At the core of Lavingia’s prompt is the direction to spare contracts involved in “direct patient care.”

Such an approach, experts said, doesn’t grapple with the reality that the work done by doctors and nurses to care for veterans in hospitals is only possible with significant support around them.

Lavingia’s system also used AI to extract details like the contract number and “total contract value.” This led to avoidable errors, where AI returned the wrong dollar value when multiple were found in a contract. Experts said the correct information was readily available from public databases.
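A simplified sketch of the failure mode described above (our reconstruction for illustration, not Lavingia's published script): truncate a contract to roughly its first 2,500 words, then ask a model for the total value, with nothing to disambiguate when several dollar figures appear:

```python
from openai import OpenAI

client = OpenAI()

def extract_total_value(contract_text: str) -> str:
    head = " ".join(contract_text.split()[:2500])  # first pages only
    response = client.chat.completions.create(
        model="gpt-4",  # older, cheaper models make this even less reliable
        messages=[{
            "role": "user",
            "content": "Return only the total contract value in USD "
                       "from this contract excerpt:\n\n" + head,
        }],
    )
    # Nothing here checks the answer against the full document or the
    # public procurement databases where the true value is recorded.
    return response.choices[0].message.content
```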

Lavingia acknowledged that errors resulted from this approach but said those errors were later corrected by VA staff.

In late March, Lavingia published a version of the “munchable” script on his GitHub account to invite others to use and improve it, he told ProPublica. “It would have been cool if the entire federal government used this script and anyone in the public could see that this is how the VA is thinking about cutting contracts.”

According to a post on his blog, this was done with the approval of Musk before he left DOGE. “When he asked the room about improving DOGE’s public perception, I asked if I could open-source the code I’d been writing,” Lavingia said. “He said yes — it aligned with DOGE’s goal of maximum transparency.”

That openness may have eventually led to Lavingia’s dismissal. Lavingia confirmed he was terminated from DOGE after giving an interview to Fast Company magazine about his work with the department. A VA spokesperson declined to comment on Lavingia’s dismissal.

VA officials have declined to say whether they will continue to use the “munchable” tool moving forward. But the administration may deploy AI to help the agency replace employees. Documents previously obtained by ProPublica show DOGE officials proposed in March consolidating the benefits claims department by relying more on AI.

And the government’s contractors are paying attention. After Lavingia posted his code, he said he heard from people trying to understand how to keep the money flowing.

“I got a couple DMs from VA contractors who had questions when they saw this code,” he said. “They were trying to make sure that their contracts don’t get cut. Or learn why they got cut.

“At the end of the day, humans are the ones terminating the contracts, but it is helpful for them to see how DOGE or Trump or the agency heads are thinking about what contracts they are going to munch. Transparency is a good thing.”

If you have information about the misuse or abuse of AI within government agencies, contact Brandon Roberts, an investigative journalist on the news applications team with a wealth of experience using and dissecting artificial intelligence. He can be reached on Signal at @brandonrobertz.01 or by email at brandon.roberts@propublica.org.

If you have information about the VA that we should know about, contact reporter Vernal Coleman on Signal, vcoleman91.99, or via email, vernal.coleman@propublica.org, and Eric Umansky on Signal, Ericumansky.04, or via email, eric.umansky@propublica.org.


How Much Energy Does It Take To Think? | Quanta Magazine

  • Research Focus: Examines the brain's high energy demands and the evolutionary factors shaping its structure.

  • Energy Consumption: The brain utilizes approximately 20% of the body's energy, with infants potentially using up to 50%.

  • Cognitive Effort: Goal-oriented tasks increase brain energy use by only about 5% compared to resting states.

  • Background Activity: A significant portion of brain energy supports maintenance and background processing, including homeostasis.

  • Evolutionary Constraints: The brain's energy efficiency reflects adaptations to environments where energy was scarce.

