Strategic Initiatives

Most of ERC Barcelona leadership resigns during budget vote

  • Eight of thirteen members of the permanent leadership of ERC's Barcelona federation have resigned.
  • The resigning members accuse the city's ERC president, Creu Camacho, of pursuing an independent and unapproved strategy and making unilateral decisions.
  • The departures include prominent figures like Secretary General Miquel Colomé and Barcelona City Councilor Rosa Suriñach.
  • The mass resignation triggers an internal party mechanism requiring an extraordinary congress within a month to restructure the local leadership.
  • Critics allege that the federation's decisions are being subordinated to the municipal group's interests within Barcelona's City Council, particularly concerning ERC's role in Mayor Collboni's government.

Barcelona has been shaken in recent hours by a serious internal crisis in ERC, after eight of the thirteen members of the permanent leadership of the Barcelona federation resigned en bloc. The departing leaders point directly to the party's city president, Creu Camacho, whom they accuse of promoting "her own strategy, without consensus," and of making decisions "unilaterally," outside the party's bodies and membership.

Among those resigning are figures of significant weight within the organization, including Miquel Colomé, the federation's secretary general, along with Quim Bosch, Nil Font, Agnès Russiñol, Rosa Suriñach (a Barcelona city councilor), Sheila Vidal, Max Zañartu and, later, Esther Martín. With more than half of the permanent leadership gone, the party's internal mechanisms are automatically triggered, requiring an extraordinary congress within roughly a month to rebuild the local structure.

The outgoing leaders' statement denounces a "drift" that, in their view, subordinates the federation's decisions to the interests of the municipal group in the City Council. The signatories maintain that the political program and project for which part of the membership backed Camacho and the outgoing team are being sidelined, and they demand that ERC Barcelona recover a more participatory dynamic, less dependent on the institutional context.

The crisis comes just a few months after Creu Camacho was elected president of ERC Barcelona with 49.6% of the vote at the regional congress held last April, a narrow victory over the rival candidacy. After that win, Camacho publicly stated that her goal was to "empower the membership" and strengthen the federation's role. Internal friction has not subsided, however, especially over the relationship with Jaume Collboni's PSC and ERC's role in the municipal government.

That the internal crisis coincides with a key moment in Barcelona City Council adds an inevitable political layer. In parallel with the resignations, the council held the plenary session at which the municipal budget was put to a vote. The budget proposal received the support of PSC, Barcelona en Comú and ERC, but the lack of a solid majority has forced Mayor Jaume Collboni to resort to a vote of confidence to guarantee its approval. If an alternative government is not formed within the established timeframe, the budget will be approved automatically.

The internal rift within ERC thus coincides with a moment of maximum visibility for the party. On one hand, the critics might have considered this the moment of greatest impact to demonstrate their rejection of the current leadership. On the other hand, Camacho could interpret the extraordinary congress as an opportunity to reinforce her leadership by seeking the support of the membership.

The conflict did not arise out of nowhere. At the congress that awarded leadership to Camacho, the two candidacies already showed deep differences regarding the relationship with Collboni's government and the role ERC should play in Barcelona: whether as a partner for institutional stability or as a more clearly independent and demanding force. The coming weeks will be decisive in determining whether the party manages to regroup or if this crisis will mark the beginning of a new era for the republican space in the Catalan capital.


Critics of Ukraine peace deal must answer: What's the alternative? | Responsible Statecraft

  • Diplomatic Shifts: Efforts to end the Ukraine war swung from optimism after the August Trump-Putin summit to renewed U.S. sanctions and threats when no ceasefire materialized.
  • New Peace Plan: The U.S. has proposed a 28-point plan with U.S. and European security guarantees for Ukraine; a Russian negotiator's leak of its contents suggests Russia is likely to accept it.
  • U.S. Leadership Role: The plan highlights U.S. diplomatic initiative as essential in addressing the U.S.-West vs. Russia security conflict, given Russia's battlefield advantage.
  • Russian Concessions on EU: Russia accepts Ukraine's EU eligibility and market access, resolving a key 2014 trigger while avoiding NATO issues.
  • Ukrainian Military Size: The plan allows Ukraine a 600,000-troop army, far larger than either side proposed in earlier negotiations and bigger than most European forces, backed by U.S. and European intervention guarantees.
  • Territorial Provisions: Ukraine withdraws from about 1% of territory in Donetsk as a demilitarized zone, retaining 80% of 1991 borders despite Russian annexations, without full reversal of conquests.
  • Additional Agreements: Russia provides $100 billion in frozen assets for Ukraine's rebuild, plus provisions on religious tolerance, minorities, and anti-Nazi standards aligning with EU norms.
  • Broader Stability: The plan includes Russian non-aggression pledges, G-8 re-entry, U.S.-Russia security group, sanction relief, and nuclear non-proliferation extensions for European peace.

Efforts to find a diplomatic solution to the Ukraine war have followed a dizzying course over the last few months. After an optimistic period around the August Trump-Putin summit in Alaska, the Trump administration, frustrated by the inability to gain an immediate ceasefire, turned back to intensified sanctions and military threats.

Now the U.S. has advanced a new 28-point peace plan and accompanying security guarantees for Ukraine from the U.S. and Europe. Although Russia has not explicitly endorsed the draft, the fact that Russian negotiator Kirill Dmitriev leaked its contents to American media suggests a high degree of Russian acquiescence to the plan. If accepted by Ukraine as well, the plan would pave the way to an immediate ceasefire and long-term settlement of the conflict.

The U.S. move to craft a detailed peace plan recognizes a core reality of the situation, which is the indispensable role of U.S. diplomatic initiative and leadership in ending the war. The Ukraine war represents a multi-domain security conflict between the U.S.-led West and Russia. Thus, many of the security drivers of the conflict cannot be addressed by Ukraine alone. Given that the Russians have the upper hand on the battlefield, they were never likely to agree to a cease-fire without assurances that their core security needs would be addressed along with those of Ukraine.

Not only does this plan do that, it represents the use of U.S. diplomatic leverage to extract major concessions from Russia that support lasting Ukrainian security and independence. Indeed — although it is hardly likely to be described this way in most of the U.S. and European press — it would be fair to describe this plan as a U.S.-mediated victory for Ukraine and for global stability in this long and brutal war. A successful implementation would create a secure and firmly Western-aligned Ukraine on some 80 percent of its 1991 territory.

Russian concessions begin with the core issue that triggered the beginning of the Russia-Ukraine conflict in 2014, namely Ukrainian accession to the European Union. The plan specifies that Ukraine is eligible for EU membership and will receive preferential access to European markets while under consideration. This conclusively signals Ukraine’s political and economic alignment with the West, while still respecting Russian security concerns about NATO. It was precisely the dispute over this issue that triggered the 2014 Maidan uprising, and Russia has now conceded it.

Russian concessions continue in the area of Ukrainian security. During the 2022 Istanbul negotiations in the opening phases of the war, Russia demanded that the Ukrainian military be limited to some 80,000 troops, a number far too small to defend against Russian aggression. In those same negotiations Ukraine itself called for a standing army of a quarter of a million troops. The current 28-point plan permits Ukraine a standing army of 600,000 troops, more than twice the level Ukraine itself asked for in the 2022 negotiations and almost eight times the Russian demand.

While this is below the current wartime size of the Ukrainian active duty military, which is over 700,000 troops, that level is almost certainly not sustainable in peacetime. A Ukrainian army of 600,000 troops would be by far the largest military in Europe (outside of Russia). Indeed, it would be larger than the British, French, and German armies combined. Ukraine’s military power would be further bolstered by a separate security guarantee of U.S. and European intervention and support in the event of a Russian attack.

The areas of the agreement most likely to be described as Ukrainian concessions are those that address territory. However, even the territorial provisions contain significant Russian concessions as compared to Russia’s recent demands and certainly compared to Russia’s initial war goal of taking political control of the major areas of Ukraine.

In late 2022, Russia claimed formal annexations of four Ukrainian oblasts in addition to Crimea. In this agreement, it drops demands for Ukrainian withdrawal from unconquered territories in two of these oblasts. Instead, Ukraine would withdraw from as-yet-unconquered territory in only one oblast, Donetsk, amounting to about 1% of Ukraine's area within its 1991 borders. But crucially, in a major concession compared with Russian positions of only a few months ago, Russia will not occupy this region — instead it will be maintained as a demilitarized zone.

Obviously the territorial provisions here would not restore Ukraine's pre-2014 borders. But that demand has proven unattainable over four years of bloody war. Leaving territories conquered by the Russians in Russian hands, and providing de facto recognition of these territories as Russian, will without question be a bitter pill for Ukraine to swallow. But given that Russia has suffered millions of casualties in the war for these territories, it was unrealistic to expect this outcome to be reversed.

These are far from the only Russian concessions in the document. For example, Russia grants $100 billion in frozen Russian assets to be used toward the rebuilding of Ukraine. Even many provisions that reflect claimed Russian goals, such as religious tolerance (presumably toward the Russian Orthodox Church), tolerance of ethnic minorities, and rejection of Nazi ideology, simply align the values of a Western-aligned Ukraine with broader European Union standards.

The document also creates a road map to a more peaceful and stable Europe, an outcome clearly in U.S. interests. The European security architecture is stabilized through Russian promises of non-aggression, Russian re-entry into the G-8, a U.S.-Russia security working group, and the gradual lifting of economic sanctions on Russia conditional on compliance with core elements of the agreement. A crucial provision states that the U.S. and Russia will “extend treaties on the non-proliferation and control of nuclear weapons.” The details of this conceptual agreement remain to be worked out but it opens the door to broader cooperation on nuclear disarmament.

For all the positive elements of this document, it is unlikely to initially be presented in such a light by many in the mainstream media. With millions of casualties in Europe’s largest war since World War II, a war that began with a Russian invasion of sovereign Ukrainian territory, there is tremendous bitterness toward Russia and tremendous and justified sympathy for Ukraine’s suffering. The reality of Russian gains in territory and the agreement that Ukraine will not join NATO may be allowed to obscure the reality of major Russian concessions and the ways in which the agreement would benefit both Ukrainian and U.S. interests. Those who have for years insisted on maximalist demands for complete reconquest of Ukraine’s territory and even a long war for regime change in Russia will find it difficult to support an agreement they regard as imperfect.

But the alternatives are much worse — especially for Ukraine, which is roiled by internal crises, teetering on the brink of economic collapse, and facing a grim battlefield situation in the Donbas and parts of the southeast. A secure and firmly Western-aligned Ukraine on 80% of its pre-war territory is a far better outcome than the terms on which this war will likely end if it grinds into 2026 or beyond.


Journalism Standards, Anyone? - John Kass

  • BBC Influence: Reaches nearly half a billion people weekly worldwide, over 50 million in the US, regarded as most influential news organization.
  • Trump Speech Edit: BBC documentary spliced two separate Trump remarks from Jan. 6, 2021, omitting "peacefully and patriotically" to suggest incitement of Capitol riot before 2024 election.
  • Edit Timeline: "Fight like hell" comment was 54 minutes after initial remarks; Proud Boys marched three hours before Trump spoke.
  • Revelation Consequences: External adviser's findings led to resignations of BBC director general Tim Davie and news head Deborah Turness; Trump threatened $5 billion lawsuit, received apology but no compensation.
  • Second Edit Incident: A 2022 BBC broadcast featured another deceptive Trump speech splice, ignored by the presenter despite a real-time objection from Mick Mulvaney.
  • Middle East Coverage Bias: BBC portrayed Israel as aggressor, used unverified Hamas casualty figures, platformed anti-Israel figures with extreme views.
  • Specific Breaches: False hospital strike report, misleading famine photos from congenital diseases, documentary narrated by son of Hamas official justified as separate political arm.
  • Broader Implications: BBC staff defend themselves with claims of minor edits and appeals to a greater truth; the author sees parallels with US media, such as the 60 Minutes edit, and argues the result is lost public trust and a shift from journalism to advocacy.

By Cory Franklin

November 21st, 2025

Kass readers may not regularly follow the daily machinations of the British Broadcasting Corporation, commonly referred to as the BBC (or more informally, Auntie Beeb). The BBC reaches close to half a billion people worldwide each week and more than 50 million in the US.

It is regarded in many circles as the most influential news organization in the world. However, recent important developments at the BBC, involving America and Donald Trump, merit our interest here, because they are particularly instructive about the sad state of journalism in general.

First the bombshell: Several days before the 2024 presidential election, a BBC documentary broadcast part of a Jan. 6 Trump speech in which he says, “We’re going to walk down to the Capitol and I’ll be there with you and we fight. We fight like hell.”

The clear suggestion was that Trump fomented the 2021 Capitol riot with his words and was not fit to be elected.

Except Trump never said those specific words in any single quote.  This was a deceptive edit, made by splicing two separate remarks, spoken nearly an hour apart. What Trump actually said in the first part was that people should march “peacefully and patriotically” to the Capitol and “make your voices heard.”

The “peacefully” comment was deleted and replaced with a “fight like hell” comment (referring to fighting against corruption) made 54 minutes later. The edit made it look as if he were marshalling the mob for a frenzy of violence, an impression reinforced by follow-up footage of the Proud Boys marching on the Capitol. It certainly looked as though they were responding to Trump’s exhortation, when in reality they began their march three hours before Trump spoke.

While top BBC executives were aware of this edit when it was broadcast, it took until last month for an external adviser, appointed to the BBC editorial standards committee, to release the findings publicly.

One of the principal BBC editorial guidelines, and supposed hallmarks of the taxpayer-funded company, is impartiality – not favoring one side over another.

Yet, there was no way to avoid the fact that this edit was an attempt to impugn Trump during an election campaign – a blatant violation of impartiality.

As Daily Telegraph columnist Janet Daley wrote, this was “a professionally crafted editing job which has to have been designed to produce a calculated effect for a political purpose.”

The news of the misrepresentation was devastating: the revelation forced the resignation of the BBC director general Tim Davie and head of news Deborah Turness.    This would be the equivalent of the resignation of the president of CBS and the editor-in-chief of the news division. Trump has threatened a $5 billion lawsuit, and demanded a retraction and an apology. So far, the BBC has offered an apology but no compensation to Trump.

It gets worse for the BBC:  the editorial standards report also documented a second deceptive Trump speech edit that was aired in 2022.

Former White House chief of staff Mick Mulvaney pointed out the misleading splicing to the BBC news presenter while on the air at the time; the presenter simply ignored his remarks and forged ahead.

If that were not enough to destroy the fiction of impartiality, the external adviser cited blatant bias favoring Hamas in the BBC’s coverage of the current Middle East war.

The BBC repeatedly portrayed Israel as the aggressor, uncritically reported Hamas casualty figures, and platformed anti-Israel journalists including one who posted about Jews on Facebook, “We shall burn you as Hitler did, but this time we won’t have a single one of you left.” In 2022 he posted, “When things go awry for us, shoot the Jews, it fixes everything.”

Like a referee who consistently makes bad calls in favor of one team, the BBC committed its worst breaches of journalistic integrity of the Middle East war, breaches for which it was forced to apologize, all at the expense of Israel.

According to the external standards memo, charges were “raced to air” without adequate checks, suggesting either carelessness or “a desire always to believe the worst about Israel.” Early in the war a BBC report that hundreds were killed in an Israeli strike on a hospital turned out to be false.

Last year’s famine narrative in Gaza was highlighted by photos of “starving children” whose emaciation was the result not of starvation but of congenital diseases.

And a widely viewed documentary of wartime life in Gaza was narrated by a 14-year-old boy, who turned out to be the son of a high-ranking Hamas official.

When this was discovered, and the BBC documentary was pulled from view, Deborah Turness tried to justify the obvious breach of standards by claiming the political arm of Hamas is separate from the military arm. This didn’t pass the smell test.

Since Davie and Turness have resigned, there has been a “circle the wagons” mentality among many BBC staff and outside British journalists with likeminded left-leaning views.

The defenses against breaches of impartiality and the denial of contravening journalistic standards are a roundup of the usual suspects,  familiar to us here in America: our edits of videos are minor and standard journalistic practice; your evidence is cherrypicked; gosh, how could our attacks be coordinated and politically motivated (even suggesting that reliable old standby, “the vast, rightwing conspiracy”); and while the supposed facts we report may be untrue or manipulated, they are nevertheless an expression of the spirit of some greater truth.

Anyone remember hearing that excuse when Dan Rather of CBS provided a patently bogus memo suggesting that George W. Bush had dodged National Guard duty?

The BBC partisanship is certainly bad, especially in view of the purported longstanding commitment to impartiality; that vaunted integrity is supposed to be the sine qua non of the organization. But the abandonment of the most basic journalistic standards by so many in the community, not just those at the Beeb, is even worse.

This is the BBC – if it can happen there, it can happen anywhere.  And it has: recall how, during the presidential campaign, 60 Minutes edited a Kamala Harris word salad interview to avoid making her look inept? Same melody, different lyrics.

The reality is the BBC and many American outlets have ceased being journalistic institutions that cover politics. They have clearly become political advocates that occasionally delve into journalism.

The worst part is that they have gone so far down a dark rabbit hole that they have lost public trust. Their vision has become so impaired that no amount of light, however bright, will convince them otherwise.

For decades, the legendary journalistic maxim of Chicago’s now-defunct City News Bureau was “If your mother says she loves you, check it out.” The editor who supposedly authored the quote claims he was misquoted, and that what he actually said was, “If your mother tells you she loves you, kick her smartly in the shins and make her prove it.” It has reached the point that if a journalist tells you something, it’s her shins or his balls that deserve the smart kick.

-30-

Dr. Cory Franklin

Cory Franklin, physician and writer, is a frequent contributor to johnkassnews.com. He was Director of Medical Intensive Care at Cook County (Illinois) Hospital for 25 years; before retiring he wrote over 80 medical articles, chapters, abstracts, and correspondences in books and professional journals, including the New England Journal of Medicine and JAMA. In 1999, he was awarded the Shubin-Weil Award, one of only fifty people ever honored as a national role model for the practice and teaching of intensive care medicine.

Since retirement, Dr. Franklin has been a contributor to the Chicago Tribune op-ed page. His work has been published in the New York Times, New York Post, Washington Post, Chicago Sun-Times and excerpted in the New York Review of Books. Internationally, his work has appeared in Spiked, The Guardian and The Jerusalem Post. For nine years he hosted a weekly audio podcast, Rememberingthepassed, which discusses the obituaries of notable people who have died recently. His 2015 book “Cook County ICU: 30 Years Of Unforgettable Patients and Odd Cases” was a medical history best-seller. In 2024, he co-authored The COVID Diaries: Anatomy of a Contagion As it Happened.

In 1993, he worked as a technical advisor to Harrison Ford and was a role model for the physician character Ford played in the film The Fugitive.


World Labs Founder Fei-Fei Li Wants AI to Be More Democratized

  • AI's Current Surge: Fei-Fei Li describes the rapid advancements and investments in AI since ChatGPT's release as a civilizational technology with profound impacts on lives and work.
  • Personal Background: Li emigrated from China to the US at age 15, facing language barriers and financial struggles, while running her family's dry-cleaning business through college.
  • Shift to AI: Initially inspired by physics for its vast scope, Li's curiosity led her to question intelligence and pursue building intelligent machines during college.
  • ImageNet Breakthrough: In 2006, Li developed ImageNet, a vast visual database with millions of images, drawing from psychology and linguistics to organize object recognition like human semantic structures.
  • Spatial Intelligence Focus: As co-founder of World Labs, Li advances spatial intelligence, enabling AI to perceive and interact with 3D worlds, complementary to language models, for applications in design, robotics, and education.
  • AI's Societal Impacts: Li acknowledges AI's double-edged nature, predicting job shifts but emphasizing upskilling, responsible development, and democratization beyond a few tech companies.
  • Risks and Responsibilities: Disagreeing with extinction fears from experts like Geoffrey Hinton, Li stresses human governance, regulation, and agency to prevent misuse, while highlighting benevolent potentials in healthcare and education.
  • Advice for Future: Li urges parents to foster human values like curiosity, critical thinking, and integrity in children, preparing them to use AI responsibly without over-relying on it.

By Mishal Husain

November 21, 2025 at 1:00 AM EDT


AI is now so present in our lives that the story of how it came to be so is receding — that is, if we ever absorbed it properly. It’s a tale of scientists laboring for years in the hope of one day making machines intelligent, and breaking down the components of human intelligence in order to get there.

Stanford University professor Fei-Fei Li was at the forefront of that quest, which is why she has been called the “godmother of AI.” In 2006 she released her academic work on a visual database containing millions of images, and the idea of training computers to “see” as humans do sparked a wave of AI development.

Behind this breakthrough is a woman with an unusual background, one that plays a role in how she sees the world. Li arrived in the US at age 15 when her parents emigrated from China. She spoke little English and had to adjust academically, socially and financially to a new environment; after her parents set up a small dry-cleaning business to make ends meet, she ran it through her college years.

When Li came into Bloomberg headquarters in London, we talked about her personal and professional history, and I found in her a deep sensitivity. Excited about the potential of technology she’s helped create, she also emphasizes human agency — you’ll find her message to parents towards the end.

Listen to and follow The Mishal Husain Show on iHeart Podcasts, Apple Podcasts, Spotify or wherever you get your podcasts.


This conversation has been edited for length and clarity. You can listen to an extended version in the latest episode of The Mishal Husain Show podcast.

May I start with this remarkable period for your industry? It’s been three years since ChatGPT was released to the public. Since then there have been new vehicles, new apps and huge amounts of investment. How does this moment feel to you?

AI is not new to me. I’ve been in this field for 25 years. I’ve lived and breathed it every day since the beginning of my career. Yet this moment is still daunting and almost surreal to me, in terms of its massive, profound impact.

This is a civilizational technology. I’m part of the group of scientists that made this happen and I did not expect it to be this massive. 1

1 Li spoke to us while in London to receive the 2025 Queen Elizabeth Prize for Engineering, alongside Nvidia CEO Jensen Huang and five others. Li has previously written and spoken about what she’s called “the AI winter” of the early 21st century, when those working in the field were getting no attention.

When was the moment it changed? Is it because of the pace of developments or because the world has woken up and therefore turned the spotlight on people like you?

I think it’s intertwined, right? But for me to define this as a civilizational technology is not about the spotlight. It’s not even about how powerful it is. It is about how many people it impacts.

Everyone’s life, work, wellbeing, future, will somehow be touched by AI.

[Chart: AI Investment Surge. Capital spending on AI-related activities exceeds half of all investment. Note: information processing equipment, software, and research and development as a share of nonresidential investment. Source: Bureau of Economic Analysis, Bloomberg calculations.]

In bad ways as well as good?

Well, technology is a double-edged sword, right? Since the dawn of human civilization, we have created tools we call technologies, and these tools are meant, in general, for doing good things. Along the way we might intentionally use them in the wrong way, or they might have unintended consequences.

You said the word powerful. The power of this technology is in the hands of a very small number of companies, most of them American. How does that sit with you?

You are right. The major tech companies — through their products — are impacting our society the most. I would personally like to see this technology being much more democratized.

No matter who builds, or holds, the profound impact of this technology: Do it in a responsible way.

I also believe every individual should feel they have the agency to impact this technology.

You are a tech CEO as well as an academic. Your very young company, little more than a year old, is reportedly already worth a billion dollars.

Yes! [Laughs]

I am co-founder and CEO of World Labs. We are building the next frontier of AI — spatial intelligence — which people don’t hear too much about today because we’re all about large language models. I believe spatial intelligence is as critical [as] — and complementary to — language intelligence. 2

2 World Labs raised more than $200 million ahead of its launch in 2024. In a TED Talk she delivered that year, Li said: “If we want to advance AI beyond its current capabilities, we want more than AI that can see and talk. We want AI that can do.”

I know that your first academic love was physics.

Yes.

What was it in the life or work of the physicists you most admired that made you think beyond that particular field?

I grew up in a small, or less well known, city in China. And I come from a small family. So you could say life was small, in a sense. My childhood was fairly simple and isolated. I was the only child. 3

3 Li grew up in Chengdu in China’s Sichuan Province; her mother was a teacher and her father worked in the computer department of a chemicals factory. In her book The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI, she linked her professional path to her early years: “Research triggered the same feeling I got as a child exploring the mountains surrounding Chengdu with my father, when we’d spot a butterfly we’d never seen before, or happen upon a new variety of stick insect.”

Physics is almost the opposite — it’s vast, it’s audacious. The imagination is unbounded. You look up in the sky, you can ponder the beginning of the universe. You look at a snowflake, you can zoom into the molecular structure of matter. You think about time, magnetic fields, nuclear.

It takes my imagination to places that you can never be in this world. What fascinates me to this day about physics is to not be afraid of asking the boldest, [most] audacious questions about our physical world, our universe [and] where we come from.

But your own audacious question, I think, was What is intelligence?

Yes. Each physicist I admire, I look at their audacious question, right from [Isaac] Newton to [James Clerk] Maxwell to [Erwin] Schrödinger to Einstein — my favorite physicist.

I wanted to find my own audacious question. Somewhere in the middle of college, my audacious question shifted from physical matters to intelligence. What is it? How does it come about? And most fascinatingly, How do we build intelligent machines? That became my quest, my north star.


And that’s a quantum leap because, from machines that were doing calculations and computations, you’re talking about machines that learn, that are constantly learning.

I like [that] you use the physics pun, the quantum leap.

Around us right now, there are multiple objects. We know what they are. The ability that humans have to recognize objects is foundational. My PhD dissertation was to build machine algorithms to recognize as many objects as possible.

What I found really interesting about your background is that you were reading very widely. And your ultimate breakthrough was possible when you started thinking about what psychologists and linguists were saying that was related to your field.

That’s the beauty of doing science at the forefront. It’s new, no one knows how to do it.

It’s pretty natural to look at the human brain and human mind and try to understand, or be inspired by, what humans can do. One of the inspirations in my early days of trying to unlock this visual intelligence problem was to look at how our visual semantic space is structured. There are tens of thousands, even millions, of objects in the world. How are they organized? Are they organized by alphabet, or by size or colors? 4

4 While doing her PhD at Caltech, Li became convinced that larger datasets would be crucial to AI progress. Later, she was influenced by neuroscientist and psychologist Irving Biederman’s paper on human image understanding, which estimated that the average person recognizes around 30,000 different kinds of objects.

Are you asking that because you have to understand how our brains organize in order to teach the computers?

That’s one way to think about it.

I came upon a linguistic work called WordNet, a way to organize semantic concepts — not visuals, just semantics or words — in a particular taxonomy. 5

5 After Biederman’s paper, this was the second influential piece of work that came from outside Li’s own academic field.

Give me an example.

In a dictionary, apple and appliance are very close together, but in real life an apple and a pear are much closer to each other. The apple and pear [are] both fruits, whereas an appliance belongs to a whole different family of objects.

I made a connection in my head. This could be the way visual concepts are organized, because apples and pears are much more connected than apples and washing machines.

But also, even more importantly, the scale. If you look at the number of objects described by language, you realize how vast it is, right? And that was a particular epiphany for me. We, as intelligent animals, experience the world with a massive amount of data — and we need to endow machines with that ability.
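
As an illustrative aside (not part of the interview): the taxonomy Li describes is queryable in the freely available WordNet database, and a few lines of Python with the NLTK library can show the apple/pear versus apple/appliance distinction she draws. This is a minimal sketch, assuming NLTK is installed and the WordNet corpus has been downloaded once via nltk.download("wordnet"); it simply takes the first listed sense of each word.

```python
# Minimal sketch: semantic distance in WordNet's taxonomy, as Li describes it.
# Assumes: pip install nltk, and nltk.download("wordnet") has been run once.
from nltk.corpus import wordnet as wn

# Take the first (most common) sense of each word.
apple = wn.synsets("apple")[0]
pear = wn.synsets("pear")[0]
appliance = wn.synsets("appliance")[0]

# path_similarity is larger when two concepts sit closer together in the taxonomy,
# so apple/pear should score noticeably higher than apple/appliance.
print("apple vs pear:     ", apple.path_similarity(pear))
print("apple vs appliance:", apple.path_similarity(appliance))
```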

It’s worth noting that at that time — I think this is the early part of the century — the idea of big data didn’t really exist.

No, this phrase did not even exist. The scientific data sets we were playing with were tiny. 6

6 “Big data” was coined in the 1990s, but the term didn’t become commonplace until the 2010s. It may now seem obvious that large datasets are crucial to machine learning, but it wasn’t always. After all, children can learn complex rules from just a few examples. As it turns out, modern AI's performance hinges on the amount of data available to it.

How tiny?

In images, most of the graduate students in my era were playing with data sets of four or six — at most 20 — object classes. Fast forward three years, after we created ImageNet; it had 22,000 classes of objects and 15 million annotated images. 7

7 A group led by Li first released the ImageNet database in 2006. It has been credited with significantly advancing the ways computers recognize objects. Crucially, Li also created a competition where teams from around the world trained algorithms to classify images in a subset of the dataset. That, in turn, allowed a team led by Geoffrey Hinton at the University of Toronto to demonstrate the power of a seemingly antiquated technique: the neural network.

ImageNet was a huge breakthrough. It’s the reason that you’ve been called “the godmother of AI.” I’d love to understand what enabled you to make these connections and see things other scientists didn’t. You acquired English as a second language after you moved to the US. Is there something in that which led to what you’re describing?

I don’t know. Human creativity is still such a mystery. People talk about AI doing everything and I disagree. There’s so much about the human mind that we don’t know. So I can only conjecture that a combination of my interest and my experiences led to this.

I’m not afraid of asking crazy questions in science, not afraid of seeking solutions that are out of the box. My appreciation for the linguistic link with vision might be accentuated by my own journey of learning a different language. 8

8 Without any scientific evidence, I do wonder if Li’s background — and especially her acquisition of a new language in the crucial last three years before college — is linked to her pioneering work with ImageNet. She had to work so hard to understand her new world of the US, culturally as well as linguistically, and to fit all of her learnings together. To me, it would follow that she thought and read widely and was also attracted to ways to organize information in order to learn better.

King Charles III (top row, 2nd left) with the recipients of the 2025 Queen Elizabeth Prize for Engineering, including Professor Geoffrey Hinton (top right), Nvidia CEO Jensen Huang (bottom left) and Li (bottom center). Photographer: Yui Mok/Pool/Getty Images

What was it like coming to the United States as a teenager? That’s a particularly difficult stage in life to move and make new friends, even if you weren’t battling a language barrier.

That was hard. [Laughs]

I came to America when I was 15 — [to] Parsippany, New Jersey. None of us spoke much English. I was young enough to learn more quickly [but] it was very hard for my parents.

We were not financially very well [off] at all. My parents were doing cashier jobs and I was doing Chinese restaurant jobs. Eventually, by the time I entered college, my mom’s health was not so good. So my family and I decided to run a little dry cleaner shop to make some money to survive.

You got involved yourself?

I joke that I was the CEO. I ran the dry cleaner shop for seven years, from [age] 18 to the middle of my graduate school.

Even at a distance you ran your parents’ dry cleaning business?

Yes. I was the one who spoke English. So I took all the customer phone calls, I dealt with the billing, the inspections, all the business.

What did it teach you?

Resilience.

As a scientist, you have to be resilient because science is a non-linear journey. Nobody has all the solutions, you have to go through such a challenge to find an answer. And as an immigrant, you learn to be resilient.

Were your parents pushing you? They clearly wanted a better life for you; that’s why they left China for the United States. How much of this was coming from them, or from your own sense of responsibility to them?

To their credit, they did not push me much. They’re not “tiger parents,” in today’s language. They were just trying to survive, to be honest.

My mom is an intellectual at heart, she loves reading. But the combination of a tough, survival-driven immigrant’s life plus her health issues — she was not pushing me at all. As a teenager, I had no choice. I either had to make it or not make it; the stakes are pretty high. So I was pretty self-motivated.

I was always a curious kid and then my curiosity had an outlet, which was science — and that really grounded me. I wasn’t curious about night clubs or other things, I was an avid lover of science.

You also had a teacher who was really important. Tell me about him.

I excelled in math and I befriended Mr. Bob Sabella, the math teacher. We became friends through the mutual love of science fiction. At that time I was reading in Chinese; eventually I started reading English.

He probably saw in me a desire to learn. So he went out of his way to create opportunity for me to continue to excel in math. I remember I placed out of the highest curriculum of math and so there were no more courses. He would use his lunch hour to create a one-on-one class for me, and now that I’m a grownup I know he was not paid extra.

It was really a teacher’s love and sense of responsibility. He became an important person in my life. 9

9 This isn’t the first time a beloved teacher has come up in a Weekend Interview: Playwright James Graham told me that he credits his childhood drama teacher, Mr. Humphrey, with inspiring his career — and even named a character after him.

Is he still alive?

Mr. Sabella passed away while I was an assistant professor at Stanford. But his family — his two sons, his wife — they’re my New Jersey family.

You use the word love. Did he and his family introduce you to American society, the world of America beyond your school?

Absolutely. They introduced me to the quintessential middle-class American family in a suburban house. It was a great window for me: to know the society, to be grounded, to have friends and have a teacher who cared. 10

10 Sabella and his wife lent Li money for college and continued to help her parents with the dry cleaning business while she was away. “In many ways, he provided what felt like a missing piece in my relationships with both of my parents,” Li wrote in her book. “My mother had always inspired me, but she didn’t actually share my interests when it came to math and physics … And while my father’s influence was nearest to my heart — he was the first to encourage my curiosity about the natural world, and my introduction to physics — I had to acknowledge I’d long since outgrown his example.”

Do you think you could have had this career in China?

I don’t think I’m able to answer this question because life is so serendipitous, right? The journey would be very different. But what is timeless or invariant is the sense of curiosity, the pursuit of north stars. So I would still be doing AI somehow, I believe.

Do you still feel connected to China?

It’s part of my heritage. I feel very lucky that my career has been in America. The environment my family is in right now — Stanford, San Francisco, Silicon Valley — is very international. The discipline I’m in, AI, is so horizontal. It touches people everywhere. I do feel much more like a global citizen at this point.

It is a global industry, but there are some really striking advances in China: the number of patents, the number of AI publications coming out, the DeepSeek moment earlier this year. As you look ahead, do you think China will catch up with the US in the way that it has in other fields like manufacturing? 11

11 The US and China are effectively alone at the top of the AI race, with most top labs located in these two countries. While much of the world’s attention is focused on Western tech companies, Bloomberg reported last month that Chinese rivals are courting Africa’s startups and innovation hubs with open-source AI models, in a strategy “with parallels to China’s Belt and Road Initiative for physical infrastructure.”

I do think China is a powerhouse in AI.

At this moment, most people would recognize the two leading countries in AI are China and [the] US. The excitement, the energy, and frankly the ambition of many regions and countries, of wanting to have a role in AI, wanting to catch up or come ahead in AI — that is pretty universal.

And your own next frontier. Tell me what you mean when you say “spatial intelligence.” What are you working on?

Spatial intelligence is the ability for AI to understand, perceive, reason and interact [with the world]. It comes from a continuation of visual intelligence.

The first half of my career — around the ImageNet time — was trying to solve the fundamental problem of understanding what we are seeing. That’s a very passive act. It’s receiving information, being able to understand: This is a cup. This is a beautiful lady. This is a microphone. But if you look at evolution, human intelligence — perception — is profoundly linked to action. We see, because we move. We move, therefore we need to see better.

How you create that connection has a lot to do with space — because you need to see the 3D space. You need to understand how things move. You need to understand, When I touch this cup, how do I mathematically organize my fingers so that it creates a space that would allow me to grab this cup?

All this intricacy is centered around this capability of spatial intelligence. 12

12 This idea that perception is fundamentally organized around action originates in the work of the psychologist James J. Gibson, who argued that organisms do not passively receive visual information but actively detect the possibilities for action. “We perceive in order to move, and we move in order to perceive,” he said. The Stanford Vision and Learning Lab’s 3D simulation platform is named The Gibson Environment to acknowledge his influence.

On your website, I’ve seen the preview you’ve released — essentially a virtual world. Is it, to you, a tool for training AI?

So let’s just be clear with the definition. Marble is a frontier model. What’s really remarkable is it generates a 3D world at a simple prompt. That prompt could be, Give me a modern-looking kitchen. Or, Here’s a picture of a modern-looking kitchen. Make it a 3D world.

The ability to create a 3D world is fundamental to humans. I hope one day it’s fundamental to AI. If you are a designer or architect, you can use this 3D world to ideate, to design. If you are a game developer, you can use it to obtain these 3D worlds, so that you can design games.

If you want to do robotic simulation, these worlds will become very useful as training data for robots or evaluation data. If you want to create immersive, educational experiences in AR [or] VR, this model would help you to do that. 13

13 There are several companies working on different flavors of this idea, including Google DeepMind. Li outlined her vision for “spatial intelligence” as AI’s next frontier in a recent essay.

A screenshot of a world created by Marble, an AI model that produces 3D environments based on a text or image prompt. Source: Marble

Interesting. I’m imagining girls in Afghanistan, virtual classrooms in a very challenged place.

Yes. Or, how do you explain to [an] 8-year-old, What is a cell? One day we’ll create a world that’s inside a cell. And then the student can walk into that cell and understand the nucleus, the enzymes, the membranes. You can use this for so many possibilities. 14

14 As Li said these words I was remembering school biology exams and drawing and labeling the structure of a cell. I can imagine a virtual world inside a cell creating a strong mental image for a younger child, one that is not easily forgotten.

Yours is a big, complex industry, but there are some immediate issues. Can I put a selection of those to you, for an instinctive response on how you see them? Number one: Is AI going to destroy large numbers of jobs?

Technologies do change the landscape of labor. A technology as impactful and profound as AI will have a profound impact on jobs. 15

15 The picture on AI and jobs is complicated and uncertain: Anecdotally, some companies claim they’re already using it to automate work — especially at startups — but it’s not yet visible in economy-wide jobs data, at least in the US. However, it may already be taking away entry-level jobs in white-collar professions, and some economists worry it could hurt workers longer-term.

Which is happening already. Salesforce CEO Marc Benioff has said 50% of the company’s customer-support roles are going to AI.

So it will, there’s no question. Every time humanity has created a more advanced technology — for example, steam engines, electricity, the PC, cars — we have gone through difficult times. But we have also gone through [a] re-landscaping of jobs.

Only talking about the number of jobs, bigger or smaller, doesn’t do justice. We need to look at this in a much more nuanced way and really think about how to respond to this change.

There is the individual-level responsibility — you’ve got to learn, upskill yourself. There is also company-level responsibility, societal-level responsibility. This is a big question.

Number two is an even bigger question. You’ll know Professor Geoffrey Hinton, a Nobel Laureate whose work has overlapped with yours. He thinks there’s a 10% to 20% chance that AI leads to human extinction. 16

16 So-called AI doomers worry that humans don’t know how to ensure alignment between AI and human goals. As the technology gets smarter, they warn, it will develop the ability to evade instructions and pursue its own self-preservation — possibly at our expense.

First of all Professor Hinton, or as I call him, Geoff — because I have known him for 25 years — he’s someone I admire and we talk a lot.

But this thing about replacing the human race, I respectfully disagree. Not in the sense that it’ll never happen. In the sense that if the human race became really in trouble, that would be the result — in my opinion — of humans doing wrong things, not machines doing wrong things.

But the very practical point he makes: How do we prevent the superintelligent creation from taking over, when it becomes more intelligent than us? We have no model for that, he says.

I think this question has made an assumption. We don’t have such a machine yet. We still have a distance, a journey to take, to that day.

My question is, Why would humanity as a whole allow this to happen? Where is our collective responsibility? Where is our governance or regulation? 17

17 It’s worth watching this interview with Hinton by Bloomberg’s David Westin. Raising the question of how we prevent AI from taking over, Hinton said companies and governments are making bad assumptions. “Their basic model is: ‘I’m the CEO, and this superintelligent AI is the extremely smart executive assistant. I’m the boss and I can fire the executive assistant if she doesn’t do what I want,’” Hinton says. “It’s not going to be like that when it’s smarter than us and more powerful than us.”

Do you think there’s a way to make sure that there is an upper limit to superintelligence?

I think there is a way to make sure there is responsible development and usage of technology internationally.

At a government level, a treaty? Or just companies agreeing to behave in a certain way?

The field is so nascent that we don’t yet have international treaties, we don’t yet have that level of global consent. I think we have global awareness.

I do want to say that we shouldn’t [overthink] the negative consequences of AI. This technology is powerful. It might have negative consequences; it also has a ton of benevolent applications for humanity. We need to look at this holistically.

I know you talk to politicians a lot. You’ve done that in the US, in the UK, in France and elsewhere. What’s the most common question they ask that you find frustrating?

I wouldn’t use the word frustrating, I would use the word concerned, because I think our public discourse on AI needs to move on from the question of, What do we do when [our] machine overlord is here? 18

18 Li is too polite to tell me she found my question on extinction frustrating, but I suspect she did. I’ve generally found such suggestions alarmist, but hearing someone as eminent as Hinton address them made me reconsider, and it’s easy to imagine Li fielding similar worries from government officials. She's been advising California Governor Gavin Newsom on “workable guardrails” for AI after he vetoed a contentious AI safety bill, and she’s also spoken to multiple heads of state.

Another question I get asked a lot is from parents, worldwide: AI is coming, how do I advise my kids? What’s the future for my kids? Should they study computer science? Are they going to have jobs?

What do you say?

I say that AI is a very powerful technology and I’m a mother. The most important way we should empower our kids is as humans, with agency, with dignity. With the desire to learn and timeless values of humanity: Be an honest person, a hardworking person, creative, critically thinking.

Don’t worry about what they’ll study?

Worry is not the right word. Be totally informed. Understand that your children’s future is living in the world of AI technology. Depending on their interest, their passion, personality, circumstances, prepare them. Worry doesn’t solve the problem.

I’ve got another industry question, about the huge sums of money flowing into companies like yours. Whether this might be a bubble, like the dot-com bubble, and some of these companies are overvalued. 19

19 Concerns around the scale of AI spending have been growing, including the circular nature of some of the bigger deals and the US economy’s exposure to the sector. Earlier this month, Michael Burry (of Big Short fame) revealed bearish wagers against Nvidia — the world’s largest provider of AI chips — and Peter Thiel’s hedge fund sold its entire stake in Nvidia last quarter.

First of all, my company is still a startup.

When we’re talking about huge amounts of money, we really look at Big Tech. AI is still a nascent technology and there’s still a lot to be developed. The science is very hard. It takes a lot to make breakthroughs. This is why resourcing these efforts [is] important.

The other side of this is the market: Are we going to see the payoff? I do believe that the applications of AI are so massive — software engineering, creativity, healthcare, education, financial services — that we’re going to continue to see an expansion of the market. There are so many human needs in wellbeing and productivity, that can be helped by AI, as an assistant [or] collaborator. And that part, I do believe strongly, is an expanding market.

But what does it cost in terms of power, energy and therefore climate? There’s a prominent AI entrepreneur, Jerry Kaplan, who says that we could be heading for a new ecological disaster because of the amount of energy consumed by vast data centers.

This is an interesting question. In order to train large models, we are seeing more and more need for power, or energy, but nobody says these data centers must be powered by fossil fuels. Innovation on the energy side will be part of this.

An Amazon Web Services data center in Manassas, Virginia. Total power usage in the US is expected to climb 2.15% in 2026, spurred largely by a 5% spike from commercial users because of the expansion of data centers, according to a US Energy Department report. Photographer: Nathan Howard/Bloomberg

The amount of power they need is so enormous; it’s hard to see it coming from renewable energy alone.

Right now this is true. Countries that need to build these big data centers need to also examine energy policy and industry. This is an opportunity for us to invest in and develop more renewable energy.

You’re painting a very positive picture. You’ve been at the forefront of this and you see much more potential, so I understand where you’re coming from. But what worries you about your industry?

I’m not a tech utopian, nor am I a dystopian — I’m actually the boring middle. The boring middle wants to apply a much more pragmatic and scientific lens to this.

Of course, any tool in the hands of the wrong mindset would worry me. Since the dawn of human civilization: Fire was such a critical invention for our species, yet using fire to harm people is massively bad. So any wrong use of AI worries me. The wrong way to communicate with the public worries me, because I do feel there’s a lot of anxiety. 20

20 At Stanford’s Institute for Human-Centered AI, Li is surrounded by people who, like her, want to use AI for the public good. But I wonder if her vision of the future is too reliant on power being held by responsible actors, who combine commercial interests in AI with policy and academic work.

The one worry I have is our teachers. My own personal experience tells me they’re the backbone of our society. They are so critical for educating our future generations. Are we having the right communication with them? Are we bringing them along? Are our teachers using AI tools to superpower their profession, or helping our children to use AI?

This is the Bloomberg Weekend Interview, so we’re always interested in people’s lives as well as their work. The life you’re living today is so different from the way that you grew up, working in your parents’ dry cleaning shop. Are you conscious of the power that you have as a leader in this industry?

[Li laughs.] I still do a lot of laundry at home.

I am conscious of my responsibility. I’m one of the people who brought this technology to our world. I’m so privileged to be working at one of the best universities in the world, educating tomorrow’s leaders, doing cutting-edge research.

I’m conscious of being an entrepreneur and a CEO of one of the most exciting startups in the gen AI world. So everything I do has a consequence, and that’s a responsibility I shoulder.

And I take that very seriously because, I keep telling people: In the age of AI, the agency should be within humans. The agency is not the machines, it’s ours.

My agency is to create exciting technology and to use it responsibly.

And the humans in your life, your children, what do you not let them do with AI, or with their devices or on the internet?

It’s the timeless advice: Don’t do stupid things with your tools.

You have to think about why you’re using the tool and how you’re using it. It could be as simple as, Don’t be lazy just because there is AI. If you want to understand math, maybe large language models can give you the answer, but that is not the way to learn. Ask the right questions.

The other side is, Don’t use it to do bad things. For example, integrity of information: fake images, fake voices, fake texts. These are issues of AI as well as our social media-driven communication in society.

You’re calling for old-fashioned values, amid this world of new developments we couldn’t have imagined even three years ago.

You can call it old-fashioned, you can call it timeless. As an educator, as a mom, I think there are some human values that are timeless and we need to recognize that.


Mishal Husain is Editor at Large for Bloomberg Weekend.


How Google Finally Leapfrogged Rivals With New Gemini Rollout - WSJ

1 Share
  • Gemini 3 Release: Google's third version of Gemini large language model launched this week, surpassing ChatGPT and competitors in industry-benchmark tests.
  • Employee Validation: Google staff conducted personal tests like jokes and math problems, confirming the model's capability to lead in the LLM field.
  • Vibe Checks: Tulsee Doshi tested Gemini 3 by asking it to write in Gujarati; the results were far better than those from prior models, which she described as "signs of life."
  • Early Access Evaluations: Aaron Levie of Box accessed Gemini 3 early, finding double-digit improvements in analyzing complex documents.
  • AI Race Leadership: Gemini 3 outperformed rivals on over a dozen benchmarks, positioning Google ahead of OpenAI and Anthropic.
  • Strategic Overhaul: Google revamped AI strategy by breaking silos, streamlining leadership, and involving co-founder Sergey Brin in development.
  • Product Integration: Gemini 3 powers a new version of Nano Banana image-generation tool and integrates into AI Mode for search.
  • Benchmark Success: Gemini 3 excelled in tests for knowledge, logic, math, image recognition, and Vending Bench for planning and tool use.

By Katherine Blunt

Nov. 22, 2025 5:30 am ET



Google CEO Sundar Pichai has worked to overhaul the company’s AI development strategy. Jeff Chiu/AP

Call it America’s next top model. 

With the release of its third version this week, Google’s Gemini large language model surged past ChatGPT and other competitors to become the most capable AI chatbot, as determined by consensus industry-benchmark tests.

The results represent public validation for Google employees who, for months, have been conducting their own, personal tests of the model—asking it for jokes, trying to stump it with math problems—and coming away convinced they had something that would finally tilt the LLM field in the company’s favor. 

For one of her “vibe checks,” Tulsee Doshi, Gemini’s senior director of product management, asked the model to write in Gujarati, a language that is spoken widely in India but isn’t especially prevalent on the internet. The results were far better than what she had gotten from earlier models.

“I call it signs of life, right?” she said. “People were coming back and saying, ‘I feel it, I think we’ve hit on something.’”

Tulsee Doshi, Gemini’s senior director of product management, conducted ‘vibe checks’ on the model. Camille Cohen/AFP/Getty Images

Aaron Levie, chief executive of the cloud content management company Box, got early access to Gemini 3 late last week, several days ahead of the launch. The company ran its own evaluations of the model over the weekend to see how well it could analyze large sets of complex documents.

“At first we kind of had to squint and be like, ‘OK, did we do something wrong in our eval?’ because the jump was so big,” he said. “But every time we tested it, it came out double-digit points ahead.”

The launch of Gemini 3 has handed Google an elusive victory: The company, for the first time in years, has pulled well ahead in the race to develop artificial intelligence.

The release of its latest AI model this week dazzled users who praised its intelligence, accuracy and creative capabilities. On Thursday, the company said Gemini 3 would power a new version of Nano Banana, a popular image-generation tool that has already driven rapid growth in Gemini usage this year.

The success of the new model poses a significant challenge to OpenAI, Anthropic and other startups vying for AI dominance. Gemini 3 outperformed competing models on more than a dozen benchmark tests scoring a range of intelligence categories.

“They are AI winners, that’s pretty clear,” said MoffettNathanson analyst Michael Nathanson. “I feel pretty good about their hand right now.”

OpenAI’s ChatGPT is still by far the most popular AI chatbot. The company said this month it now has 800 million users each week, compared with Gemini’s 650 million monthly users. And Anthropic’s Claude is widely regarded as one of the leading models for coding. But Gemini 3’s advances have the potential to cement it as a preferred tool for a diverse set of tasks, users and analysts say.

Google has been scrambling to get an edge in the AI race since the launch of ChatGPT three years ago, which stoked fears among investors that the company’s iconic search engine would lose significant traffic to chatbots. The company struggled for months to get traction. 

Chief Executive Sundar Pichai and other executives have since worked to overhaul the company’s AI development strategy by breaking down internal silos, streamlining leadership and consolidating work on its models, employees say. Sergey Brin, one of Google’s co-founders, resumed a day-to-day role at the company helping to oversee its AI-development efforts.

During its annual developers conference in May, Google unveiled a suite of sophisticated AI products and a revamped version of its classic search engine featuring AI Mode, which answers search queries in a chatbot-style conversation. It helped some investors gain confidence that the company was mounting a comeback, Nathanson said, but the share price still languished this summer.

“Wall Street was debating whether these guys were going to be AI roadkill,” he said.

Attendees tried activities that highlight Google's use of AI with Gemini during Google's annual I/O developers conference earlier this year. Camille Cohen/AFP/Getty Images

Then, in August, the debut of Nano Banana launched Gemini usage on its fastest trajectory to date: monthly users climbed to 650 million, up from 450 million in July.

The company also notched a significant victory in September, when a federal judge declined to levy harsh penalties against the company after earlier finding it maintained an illegal monopoly in the search market. The judge said the competitive dynamics of the market were changing already, largely because of AI.

Google parent Alphabet reported record quarterly revenue last month, largely because of growth in cloud computing and advertising. Its shares are up more than 50% this year and more than 60% since the summer. Its market value this week hit $3.6 trillion, surpassing that of Microsoft for the first time in seven years.

Google aimed to develop Gemini 3 to succeed in some of the most challenging areas of artificial intelligence. The company’s engineers and researchers wanted to improve this model’s ability to “see,” analyze and generate all manner of content: text, images, audio, video and code. And they wanted to improve its capacity for thought and reasoning in the interest of building a better personal assistant for coding and other tasks.

After the launch, a table showing Gemini 3’s score on 20 benchmark tests circulated widely online. The model significantly outscored the latest ones from ChatGPT and Anthropic on tests involving expert-level knowledge, logic puzzles, math problems and image recognition. It took second place to Anthropic’s Claude Sonnet 4.5 on a single benchmark involving coding.

Google ran some of the tests internally; the rest were done by other companies. Employees spent the weekend before launch waiting for the scores to come back, some of which turned out far higher than expected.

Doshi said the best surprise was Gemini 3’s high performance on an evaluation called Vending Bench, which tests a model’s ability to think and act over time by asking it to operate a vending machine. The model must track inventory, place orders and set prices in order to make money in the simulation. 

“Vending Bench reflects one of the things we were hoping to really shift with this model and push on, which is improved tool use and improved planning,” she said. 

As part of the launch, Google began offering subscribers the opportunity to use Gemini 3 in AI Mode, the first time the company integrated a new model into search on day one. The company has plans to make it available soon for everyone in the U.S.

Robby Stein, vice president of product for search, said he worked for months with the Gemini team to determine how the new model could improve the delivery of search-engine results. For one of his vibe checks, he used AI Mode to ask for help explaining to his 7-year-old son the concept of lift force on an airplane.

He expected a written response. The result: an interactive simulation that showed currents moving over a wing, with a slider allowing him to move the wing, change the currents and lift the plane into the air.

“I was like, ‘Wow, this actually can be capable of presenting information in the best way given the question,’” he said. “That was my main ‘aha’ moment with this product.”

Write to Katherine Blunt at katherine.blunt@wsj.com

Copyright ©2025 Dow Jones & Company, Inc. All Rights Reserved.



Chesterton's Fence: Explained

1 Share
  • Chesterton's Principle: G.K. Chesterton advocated not removing fences without understanding their original purpose to avoid unintended consequences.
  • Modern Reformer Critique: Quick reformers ignore established structures, risking disruption, as illustrated by the fence analogy.
  • Neighborhood Fence Example: An old, rotting fence might demarcate property or prevent hazards, justifying investigation before removal.
  • Societal Traditions as Fences: Customs and traditions serve hidden functions, and dismissing them shows arrogance without comprehension.
  • Unforeseen Problems Risk: Removing structures without inquiry can create new issues, emphasizing the need to anticipate overlooked threats.
  • Intellectual Humility Benefit: Approaching traditions with humility acknowledges personal knowledge limits and avoids chronological snobbery.
  • Progress Through Understanding: Investigating purposes enables better reforms that preserve core functions while addressing evolving needs.
  • Mindful Reform Advice: Pause to question traditions' origins before changing them, building progress on historical foundations.

TL;DR / Summary

G.K. Chesterton was an early 20th century English writer known for his clever paradoxes.

He once wrote: “There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, ‘I don’t see the use of this; let us clear it away.’ To which the more intelligent type of reformer will do well to answer: ‘If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.’”

In other words, don’t be so quick to tear down things you don’t understand. That fence may have been put up for a very good reason, even if that reason is not immediately obvious. To ignore that reality risks unintended and potentially negative consequences.

Don't Tear Down That Fence!

We've all seen that old, dilapidated fence in the neighborhood. The wood is rotting, pieces are missing, and it seems to serve no real purpose. Our first instinct may be to tear it down and get rid of an eyesore. But we would be wise to heed the advice of G.K. Chesterton and first understand why the fence exists before dismantling it.

Chesterton urged us to respect fences and traditions, even - or perhaps especially - when we don't grasp their purpose. His maxim goes:

"Don't ever take a fence down until you know the reason it was put up."

This principle has come to be known as Chesterton's fence. And it can save us from rash decisions with unintended consequences. Let's explore three reasons why.

We Risk Unforeseen Problems

That decaying fence in the neighborhood seems totally pointless. But before tearing it down, we should investigate whether it was erected for a good reason. Perhaps it demarcates a property line. Or prevents people from walking into a dangerous area. If we remove things without first understanding why they exist, we risk creating new issues.

As an analogy, think of societal customs and traditions as fences. Calling them useless hindrances is arrogant when we don't fully comprehend the functions they serve. It’s easy to see only their external forms and reflexively remove them whilst missing their deeper meaning.

Ask yourself: is it possible this fence protects against threats I’m failing to anticipate? As visionary CEO Astro Teller wisely said: “Ask yourself what you could do to break this thing, even when everyone else thinks you’re crazy.” We tear things down at our peril when we believe we know better without sincerely trying to understand them first.

It Can Display Intellectual Humility

Chesterton’s fence teaches us intellectual humility. When we encounter traditions and societal structures that appear flawed or archaic, it displays wisdom to first assume we’re missing something important.

We may eventually decide the fence needs renovating. But beginning from a humble place of “I don’t fully understand this” protects us from chronological snobbery. It stops us from trumpeting today’s knowledge as intrinsically superior whilst overlooking the failures of past systems splattered across history’s sidewalks.

Making an earnest effort to grasp why fences exist needn’t imply full agreement with their purpose. It simply indicates respect, self-awareness of our intellectual limits and an appreciation that most customs arise for valid reasons, to fulfil real needs. As pioneering philosopher Karl Popper wrote, "We are not students of some subject matter but students of problems. And problems may cut right across the borders of any subject matter or discipline." True wisdom knows its knowledge has limits.

It Can Spark Needed Progress

Investigating the “why” behind fences often leads to crucial revelations. Yes, we may still replace the fence with a better alternative serving its core function. But by first understanding its deeper purpose, we avoid merely erecting more barriers against progress.

This humble questioning can ignite reform by unveiling what fences protect even whilst showing where they fail to fulfil society’s developing needs. As visionary abolitionist Frederick Douglass wrote, “Knowledge makes a man unfit to be a slave.” We can only upgrade beyond current fences when comprehending what essential needs they aim to meet.

Progress arises from respectfully understanding fences before changing them. Thoughtless removal leaves people feeling threatened rather than liberated. But mindful reform means replacing fences with better alternatives still fulfilling their core protective purposes. We show maturity by building upon what came before whilst creating space for continuing growth.

The next time you encounter an apparently outdated tradition, pause and compassionately ask why it first arose before demanding its demolition. Progress stands on the shoulders of the past, not by turning backs upon it. Our shared future relies on each of us displaying humility in questioning the reasons behind the fences built by those who came before us. What fences in your life and work could do with kinder, wiser questioning today?
