
On Dwarkesh Patel's Podcast With Nvidia CEO Jensen Huang

  • Business Interview Overview: The conversation covers Nvidia’s corporate strategy, including chip allocation, competitive moats, and the company's past failure to secure an early investment in Anthropic.
  • Export Controls Dispute: The discussion turns into a heated debate regarding AI chip sales to China, with the CEO arguing for market access while the interviewer challenges the national security implications.
  • Questionable Logic: The CEO relies on contradictory claims, simultaneously asserting that China can easily bypass restrictions while insisting that American chip dominance must be maintained through continued Chinese sales.
  • Prioritization Of Profits: The arguments presented by the CEO suggest that the primary motivation for advocating against trade restrictions is long-term market share for the company rather than alignment with national interests.
  • Lack Of Strategic Alignment: The CEO appears to dismiss the systemic importance of AGI and superintelligence, signaling a disconnect between current corporate operations and the long-term geopolitical impacts of AI development.

Some podcasts are self-recommending on the ‘yep, I’m going to be breaking this one down’ level. This was one of those. So here we go.

[](https://substackcdn.com/image/fetch/$s_!mbEh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69fcfbba-631a-4ca4-bdc4-f27ff51711c5_1853x933.png)

As usual for podcast posts, the baseline bullet points describe key points made, and then the nested statements are my commentary. Some points are dropped.

If I am quoting directly I use quote marks, otherwise assume paraphrases.

As with the last podcast I covered, Dwarkesh Patel’s 2026 interview with Elon Musk, we have a CEO who is doubtless talking his agenda and book, and has proven to be an unreliable narrator. Thus we must consider the relevant rules of bounded distrust.

Elon Musk is a special case where in some ways he is full of technical insights and uniquely valuable takes, and in other ways he just says things that aren’t true, often that he knows are not true, makes predictions that markets then price at essentially 0%, and also provides absurd numbers and timelines.

Jensen Huang is not like that, and in the past has followed more traditional bounded distrust rules. He’ll make self-serving Obvious Nonsense arguments and use aggressive framing, but not make provably false factual claims or absurd predictions. I think he mostly stuck to this in the interview here, but there are some whoppers that seem to be at least skirting the line.

I do not worry for Jensen Huang, only about him.

For full disclosure: I am a direct shareholder of Nvidia. I am long.

[Scheduling note: Weekly AI post will be tomorrow 4/17, with ‘knowledge cutoff’ at the release of Opus 4.7. Coverage of Opus 4.7 begins on Monday.]

Podcast Overview Part 1: Ordinary Business Interview

This was essentially an interview in two parts.

The first half, until about 57 minutes in, and I would also include the last few questions at the end in this, is about ordinary business questions. Why and how is Nvidia making these choices, these investments, these allocations of chips? Where is Nvidia’s moat? How do they think about these questions?

In these questions, there’s no doubt Jensen is talking his book and about how Nvidia is great. That’s what CEOs do, and maybe it’s a little thick, but aside from one stray swipe at so-called ‘doomers’ it’s fair play.

Jensen downplays TPUs as less flexible than GPUs, including that they lack CUDA, saying this will also matter for different AI architectures. I don’t buy that the edge matters so much for a large portion of business.

His explanation of how Nvidia allocates its chips seems disingenuous, and I do not centrally believe his account of this, but that’s the way such things go.

The most interesting part of the first half was his comments about Anthropic, and in particular how Anthropic ended up primarily training and running on Trainium and TPUs.

Jensen has nothing but good things to say about Anthropic, and he takes responsibility for letting this slip through his fingers and vows not to let it happen again. He figured Anthropic would get ordinary VC funding, because he did not understand the extent of their compute needs. Thus, in the early days, Google and Amazon invested and got Anthropic locked into those alternative chip ecosystems. He was happy to invest later, but Anthropic had already done a ton of work integrating and working with the other chips.

Jensen lost out on Anthropic partly because at the time he lacked the free cash, but mostly because he was insufficiently scaling pilled and AGI pilled. He understands this now, but he has not updated sufficiently. He still remains not very pilled, in any sense, on what is to come. He claims he can scale up his whole supply chain as much as he wants with a few years of notice, but keeps not scaling it up sufficiently. Power will become a new potential limiting factor on chip sales within a few years, but that wasn’t importantly true before.

There is no hint, in this first half, that he thinks he is running anything other than an ordinary computer hardware business, except one scaling uniquely large and fast and profitably.

Podcast Overview Part 2: A Debate About Chip Exports

The half of the interview everyone is talking about is the second half, where they argue, often quite heatedly, about AI chip exports to China. Jensen of course wants to sell his chips to China, and Dwarkesh argues that we should not do this, while presenting this as a devil’s advocate position. My read is he mostly believes the things he is arguing, albeit with some uncertainty.

This is a high difficulty interview. Dwarkesh does a great job of engaging and not being afraid to push back. A bunch of it goes around in circles at times, but that seems unavoidable, and also was often revealing in its own way. Kudos for pushing.

Jensen tries to have many things both ways. His chips are way better, but China has all the chip manufacturing capability it needs, but it has unlimited energy with would-be data centers fully powered and sitting empty, but they can just use more worse chips, but America is so far ahead we shouldn’t worry about a few chip sales, but if we don’t sell those chips then we cede the world’s second largest market, and you both can and can’t switch model architectures, and our sales would both not impact China’s compute access and be the difference between them staying on CUDA or not, and so on.

The biggest thing is that he repeatedly makes clear what he cares about.

What matters is Nvidia selling chips to China. That’s it. Nothing else matters. That keeps Nvidia and CUDA dominant, and what’s good for Nvidia is good for America, because if anything is built on his chips then that’s ‘good news’ and we win, whereas if it’s built on someone else’s chips, then that is ‘bad news’ and we lose.

This does not actually make any sense whatsoever. Whose chip is running the model and application is not the important thing and this should be very easy to see. But also there is no real competition in chip sales and won’t be for a long time, as everyone is compute limited and Chinese capacity to produce even much worse chips is severely limited.

By Jensen’s arguments, we’re sacrificing his layers of the ‘five layer cake’ that is AI to benefit the model layer and it is not fair, and it’s bad for America, because it means our ‘tech stack’ won’t win, and what matters is this mystical ‘stack’ that is actually code for the chips themselves.

Even if AI was going to indefinitely remain a ‘normal technology’ and ‘mere tool,’ and all we were dealing with was mundane AI, this would be wrong until at least such time as Nvidia can saturate market demand. Every chip made and sold to China is a chip not made and sold to America. Even after that, compute access will be key to economic productivity and technological advancement and also national security, even in these normal worlds.

If you understand that superintelligence is likely coming, and that everything is going to change and likely do so relatively soon, then the situation becomes overwhelming.

Especially poor was Jensen’s answer to the problem of cybersecurity and Mythos, which was that we need to have a dialogue with the Chinese and get them to agree to not use AI for bad purposes, presumably including cyberattacks.

I very much support entering dialogues with China about AI, and agreeing on things not to be doing, but in this situation that is obviously both hopelessly naive and non-viable in practice. The Chinese have a long history of doing such things after agreeing not to do them, so what is the verification method once we allow them to have the capability to do it?

Are you going to require them to heavily restrict and monitor all API calls? Cause that’s kind of the bare minimum, even if they can be trusted to want to stop doing it. It’s actually a lot easier to not develop the capabilities in the first place, but either way you need to lay foundations first, this takes time, and we have not done that.

Thus, yes, there is a huge divide here, where Jensen remains legitimately unpilled on the ideas of AGI and superintelligence, and doesn’t understand the thing his company is enabling to be brought into existence.

But also, even if Jensen were right about that, he would still be wrong otherwise, given the things we already know are possible. We are simply past the point where ‘AI is such a normal technology that you should just sell China the chips’ is a viable argument. We know it isn’t true.

Jensen only wants one thing, and it’s not disgusting but I also want other things.

[](https://substackcdn.com/image/fetch/$s_!9NzC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9beb1293-d579-4d93-a556-33ea2f34027e_900x591.jpeg)

I’ll cover reactions at the end of this post, once we have proper context.

What Is Nvidia’s Moat?

  1. Nvidia makes software that TSMC and others use to make hardware, but why wouldn’t that get commoditized the same as other software? Jensen has been asked this one a lot, and responds with what is clearly a well-rehearsed speech. He gives three real answers: Demand is going to go up up up, software companies in general will thrive with tool use, and Nvidia’s particular task is extremely hard.

    1. Demand is definitely going to go up. That’s not in dispute.

    2. Software companies thrive with tool use if and only if they can continue to provide a unique product that is superior to the competition, and especially superior to new entrants and homebrews in valuable ways. It is not obvious which way this goes and he isn’t offering an argument.

    3. Nvidia’s task is indeed extremely difficult. The ability to use limited TSMC capacity to create modestly more powerful chips will only get more valuable, even if the competition could do a pretty good job. But this isn’t an argument for why in Glorious AI Future rivals can’t design chips that are as good.

    4. I still am happy (not investment advice!) to be long Nvidia, but this doesn’t show us much of a moat yet.

  2. Nvidia has ~$100 billion in purchase commitments, and soon will have $250 billion, locking up scarce components. Is that Nvidia’s moat? Jensen says they make big commitments, including getting other companies to make big investments by showing the future size of the market, which he spends a lot of time doing. They have the supply chains and the cash flow and the churn.

    1. I buy that all of that is great investments that help a lot, and that new competition would struggle with various parts of this.

    2. I don’t think this would be a sustainable moat over time, if there was serious competition, but in the medium term it’s a big edge.

  3. Can Nvidia keep doubling revenue and tripling flops provided year after year, or are we hitting capacity walls such as at TSMC? Jensen notes anything can be a bottleneck (the hardest is actually electricians and plumbers), but they’re scaling the hell out of everything, and all the bottlenecks get attention. Any given bottleneck can be scaled within two or three years given a demand signal.

  4. He wants to ‘reindustrialize the United States.’ He needs energy, but the other stuff is all 2-3 year problems.

  5. “This is one of the concerns that I have about the doomers describing the end of work and killing of jobs. If we discourage people from being software engineers, we’re going to run out of software engineers. The same prediction happened ten years ago. Some of the doomers were telling people, “Whatever you do, don’t be a radiologist.” You might hear some of those videos still on the web saying radiology is going to be the first career to go and the world is not going to need any more radiologists. Guess what we’re short of? Radiologists.”

    1. This is very clearly a case of ‘doomer’ being used as a slur in order to dismiss anyone concerned about any negative impact of AI via association and vibes.

    2. This also has some valid points, but it is incoherent. We must unpack.

    3. There are two kinds of They Took Our Jobs concerns, which this conflates.

    4. First is the ‘end of work’ in general and killing jobs in general, and worries about mass unemployment and declining wages. He says he is addressing this concern at first, but then pivots and doesn’t actually talk about it. As I’ve said before, I think some are too concerned about this, but with sufficient capability this becomes a big worry.

    5. What he mostly discusses is predictions that particular jobs will suffer from local technological unemployment.

    6. There are clearly some cases where this is true for AI today. If you told someone ten years ago to become a translator, you did them dirty.

    7. Radiologists were an interesting case, often discussed. Those warning about this were right that AI would be superhuman at analyzing images.

    8. But this caused an increase in demand for radiology, and AI can’t replace many other parts of the job, and because in the longer run radiology is going to be increasingly automated and doctors have 40 year careers, many opted out of radiology.

    9. So for now, in 2026, we have a shortage, and radiologists earn a lot. However, in the longer run, it seems likely demand for radiologists will decline as a percentage of demand for doctors. Standard economic theory says that this means we should currently have a shortage of radiologists.

    10. Thus, it’s not clear the shortage is even inefficient. But to the extent that it is and we made a collective mistake, it was a specific bad prediction about this particular profession, which has a training lag of many years.

    11. Moving on to software engineers, we should worry less here about both errors, because especially now with agentic coding the supply of coders is elastic. You can get going relatively quickly. My guess is we will want more engineers for a while, not less.

    12. This shouldn’t be a ‘doomer’ or ‘decel’ versus ‘optimist’ or ‘accelerationist’ thing. This is an allocation problem, where you have to be forward looking, and you do the best you can, and who is right about the big picture does not have that much say in who is right about the specific choices.

TPU vs. GPU

  1. TPUs trained Claude and Gemini. What does it mean? Time for another speech. Jensen pitches TPUs as a narrow product whereas GPUs accelerate all sorts of computing, so they have much wider market reach. You can do it yourself or rent, and do things TPUs can’t.

    1. This raises the question of why compute is not more fungible. xAI, which Jensen mentions, has these huge arrays of GPUs, but no one wants their inference, so why aren’t they renting out that capacity? Or are they?

    2. I buy that GPUs have lots of applications TPUs can’t touch. I won’t be using a TPU to power my monitors, after all.

    3. But if AI is the dominant reason to want compute going forward, and TPUs are fungible there with GPUs, then won’t TPUs end up competitive for a large portion of the space?

    4. Jensen’s arguments didn’t address this, and it was the central implied question, so Dwarkesh asks more explicitly.

  2. Dwarkesh points out the $60 billion in profits per quarter for Nvidia is mostly from AI, not quantum and pharma. With that, why do you need the flexibility of a GPU? Jensen says, sure matrix multiplication, but you might want to use other techniques as well. He brags about getting 50x energy efficiency with Blackwells over Hoppers. MoEs are one such innovation.

    1. Jensen is saying 50x more efficient per unit of compute, or for the same software task, not for chip versus chip. Power is still a limiting factor.

    2. MoEs were invented by Google on TPUs, so clearly they can do MoEs, although if you are not Google or Anthropic you might need CUDA.

    3. Google could close most or all of this effective gap if it cared to open source its own internal TPU kernel libraries, but they don’t want to do that, and would rather try to use their TPUs to win in AI rather than selling chips.

    4. Is Google right about that? Unclear, but selling a ton to Anthropic is a weird middle path that likely reflects infighting between Cloud and DeepMind.

    5. The point being, Nvidia’s moat against Google in AI chips is… Google, mostly?

  3. 60% of Nvidia revenue is from the big five hyperscalers. Do they need CUDA? OpenAI has Triton, Anthropic and Google run their own accelerators. Jensen gives the ‘happy to help with all frameworks’ and also the ‘CUDA is super flexible with a huge install base and every cloud provider’ talking points.

    1. Okay, sure, nothing surprising but solid.
  4. Do those advantages matter to the important customers, though, enough to protect 70%+ margins? Nvidia has lots of engineers optimizing everyone’s stacks, and we’re talking 2x improvement or more. He taunts TPU and Trainium for not getting measured via InferenceMAX, and claims the supposed TPU 40% edge doesn’t make sense and is probably fake.

    1. Nvidia has real advantages but Jensen is overplaying his hand a bit here.
  5. Jensen says all this ‘competition’ is really Anthropic: “Anthropic is a unique instance, not a trend. Without Anthropic, why would there be any TPU growth at all? It’s 100% Anthropic. Without Anthropic, why would there be Trainium growth at all? It’s 100% Anthropic. I think that’s fairly well known and well understood. It’s not that there’s an abundance of ASIC opportunities. There’s only one Anthropic.” And OpenAI might be building Titan but they’re ‘vastly Nvidia.’

    1. The claim is basically ‘Anthropic is the weird exception, other AI companies would never, that’s the only reason those chips have meaningful sales.’

    2. Anthropic is proof of concept, but if you need long term investment and scale and even deep TPU familiarity on day one before it makes any sense, then maybe?

  6. Jensen basically blames Anthropic being on TPUs on his inability at the time to invest early on in Anthropic, whereas Google and AWS invested. He’s not going to make that mistake again.

    1. I think this is partly an ‘Amazon and Google bribed Anthropic to get their business’ story, but mostly a ‘Nvidia failed to bribe Anthropic’ story.
  7. Dwarkesh points out that with 70% margins, you can be a lot worse than Nvidia and still come out ahead if you roll your own. Jensen fires back that ASIC margins at places like Broadcom are similar, ~65%, anyway.

Why Isn’t Nvidia Hyperscaling?

  1. Jensen says Nvidia scaled as soon as they could have, and invested in the labs as soon as they could. There wasn’t enough cash and he figured the labs would raise from VCs. He’s happy Anthropic exists even though they raised from Google and Amazon.

    1. This is the trader’s lament. If it was a good trade, you should have done more, and you should have done it earlier.
  2. But what about now with all the piles of money? Why not be a cloud provider? That’s not Nvidia’s business or philosophy. If others can do it, you let them do it.

    1. People underestimate the importance of staying focused.

    2. I totally believe that Nvidia made the right call here, if you don’t think superintelligence is going to render anyone but the AI labs powerless. Invest in all the model companies, lock in as much business as possible, win no matter who wins, don’t try to be a model company or a cloud provider.

    3. If you do think only AI labs matter, a la Musk, then big mistake. Oh well.

  3. Why doesn’t Nvidia ‘pick winners’? Not their job. Let them fight it out.

    1. I would add, the competition helps Nvidia.

    2. Nvidia of course ‘picks winners’ in another sense, by choosing magnitudes of investments, and choosing valuations, and choosing allocations, it just tries to do so in a way that keeps the competition flowing.

    3. If Nvidia truly didn’t want to pick winners it would allocate fully via price.

  4. Nvidia ‘doesn’t want to be in the financing business,’ but of course they will help OpenAI with $30 billion when they need it, it’s a great investment. They don’t ‘just want to prop up neoclouds’ or hyperscalers or labs.

    1. They’re in the financing business. That’s the financing business.

    2. That said, I do not think Nvidia is doing it to prop up otherwise unpromising businesses or making bad investments. I think Nvidia is using it to secure business deals while also making otherwise good investments. Win-win-win.

    3. They are in the side of the financing business where first you have to prove that you don’t need the financing. Which is most of the financing business.

  5. Both agree: There is a shortage of GPUs.

    1. This will be important later.
  6. How does Nvidia divvy up scarce allocations of GPUs? First the customer has to place a purchase order. Then it is first in, first out. Larry and Elon (brought up unprompted) never begged for GPUs. It’s all just placing an order.

    1. I don’t have any insider information, but this smells like straight up bullshit.

    2. There was for a long time a truly massive GPU shortage, with demand many times the size of supply at Nvidia’s price points. Allocations were existential.

    3. If you were really just indifferent, you’d raise prices.

    4. If it was fully first in, first out, then the allocation pattern looks very different.

    5. Even if Jensen isn’t going to listen, of course Elon Musk was going to try whatever he could to beat the system same as everyone else, only more so. Maybe he didn’t ‘beg’ per se, but that is a classic Suspiciously Specific Denial.

    6. This is not consistent with the story he told about Anthropic.

  7. Why not highest bidder? “Because it’s a bad business practice. You set your price and then people decide to buy it or not. I understand that others in the chip industry change their prices when demand is higher, but we just don’t. That’s just never been a practice of ours. You can count on us. I prefer to be dependable, to be the foundation of the industry. You don’t need to second-guess. If I quoted you a price, we quoted you a price. That’s it. If demand goes through the roof, so be it.”

    1. Every economist is screaming right now.

    2. If you can’t be depended on to deliver product, you’re not dependable.

    3. If you are pure FIFO at a fixed too-low price, you often won’t deliver.

    4. Yes, I agree that if I quote you a price, that’s the price even if demand goes up. But Nvidia’s prices for years were lower than market clearing.

  8. They have a great relationship with TSMC. They fight, and sometimes there’s some ‘rough justice’ but you can count on TSMC to be there every year, and for Nvidia to be there with a new product every single year. Both of them can scale as high or low as you need, you just need to place an order.

    1. If this wasn’t true, he would still say the same thing.

    2. It is a weird situation, where both sides need each other and have to divide the profits, with a huge potential ZOPA, but yes ultimately they make it work.

    3. The giant piles of money? They help.

    4. It makes sense, given the explosive growth, for there to be somewhat of a shortage most of the time. Same ‘mistake’ as Anthropic.
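The pricing disagreement above (fixed price plus first-in-first-out versus letting the highest bidder set the price) can be sketched with a toy model. All names and numbers below are made up for illustration; this is not a claim about how Nvidia’s actual ordering system works.

```python
# Toy model (hypothetical numbers): allocating a fixed chip supply under
# (a) a fixed below-clearing price with FIFO, vs (b) a market-clearing auction.

def fifo_allocation(orders, supply):
    """Fill purchase orders in arrival order until supply runs out."""
    filled = []
    for buyer, qty in orders:  # orders are already sorted by arrival time
        take = min(qty, supply)
        if take > 0:
            filled.append((buyer, take))
        supply -= take
    return filled

def clearing_allocation(bids, supply):
    """Sell to the highest bids first; the marginal filled bid sets the price."""
    bids = sorted(bids, key=lambda b: -b[2])  # sort by willingness to pay, descending
    filled, price = [], 0
    for buyer, qty, wtp in bids:
        take = min(qty, supply)
        if take > 0:
            filled.append((buyer, take))
            price = wtp  # last bid that gets any chips sets the clearing price
        supply -= take
    return filled, price

# Demand of 140 chips against supply of 100.
orders = [("early_buyer", 80), ("late_buyer", 60)]
bids = [("early_buyer", 80, 30_000), ("late_buyer", 60, 45_000)]

print(fifo_allocation(orders, 100))       # late buyer only gets 20 of the 60 it wants
print(clearing_allocation(bids, 100))     # the higher-value late buyer is fully served
```

The point the bullets make falls out directly: under FIFO at a fixed price, whoever ordered first gets served regardless of how much they value the chips, so allocation stops being neutral the moment demand exceeds supply.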

Selling Chips To China

So far, they’ve spent an hour asking Jensen standard business questions, and he’s provided mostly standard business answers. No one is talking about that hour.

This next part is the part everyone online is talking about. Export controls.

  1. Dwarkesh presents himself as devil’s advocate. He’ll take the anti-export side.

  2. What about Mythos? Wouldn’t Chinese companies being able to train something like Mythos, especially first, threaten American national security?

    1. I think we can all agree that would be a very scary scenario, as a specific example of us not wanting China to have the most capable models.

    2. There are other classes of cases for export controls as well.

  3. Jensen starts out by saying Mythos was trained on ‘a fairly mundane amount of fairly mundane capacity’ by an extraordinary company. China has such capacity. He ‘wants the United States to win,’ but don’t make them your enemy, they’re too capable, you see. “They manufacture 60% of the world’s mainstream chips, maybe more…. They have 50% of the world’s AI researchers.”

    1. Dwarkesh offers a distilled recap on this and the next few items here.

    2. Okay, look, functionally this is just straight up bullshit.

    3. Yes, technically the stats here are probably true, but he’s trying to say ‘China has all the chips it needs,’ which is false, and just as much talent as we do, which is false (number of researchers is a really dumb measure of talent and capability) and ‘therefore we can’t beat China, we have to make a deal.’

    4. He does not want the United States to win. Or at least he doesn’t much care. Indeed, he’s falsely saying that we can’t.

  4. He wants ‘research dialogue’ to make a deal on what not to use AI for.

    1. I do want a dialogue with China on AI safety issues. I strongly agree that it would be good for our researchers and AI people to be talking.

    2. Agreeing to put aside some uses of AI is good. The first step of ‘no AI in command of nuclear weapons’ is clearly good.

    3. Extending that to cyber attacks would also be good, but in what sense do we not already have an agreement to not do cyber attacks in the first place?

    4. And in what sense is China not blatantly ignoring that rule all the time?

    5. So why would we expect China to hold to such a deal if they had the AI capabilities to do the cyberattacks? Are you going to do real verification?

    6. Making a deal to not use AI for [X] is much harder than making a deal to ensure AI can’t do [X] in the first place. Making a broad general agreement can actually be much easier than making narrow agreements.

  5. Jensen talks about how open source and the startup ecosystem are vital to cybersecurity, that ‘the ecosystem needs open models’ to do the work, and that a lot of the cybersecurity work is coming out of China. He says “The idea that you’re going to have an AI agent running around with nobody watching after it is kind of insane.”

    1. There are AI agents running around with nobody watching after them.

    2. There are going to be a lot more of them. Deal with it.

    3. Is that insane? Mu. But it’s happening.

    4. As for the other stuff, it’s mostly non sequiturs, and it’s not the future of cybersecurity, and he certainly isn’t beating the rumors here.

  6. Dwarkesh pushes back. The Chinese chips are 7nm at best. They have 10% of the flops we have. Anthropic getting there first and getting to do Glasswing was kind of important. Once such a model is out there, amount of compute matters a lot. All the labs are bottlenecked on compute, both in America and in China.

    1. He seems straight up correct about all of this.
  7. Jensen responds that yes we should always be first and always have ‘more’ compute, but China has ‘enormous’ amounts of compute, the second largest market in the world, and they could aggregate that.

    1. Enormous is relative here.

    2. They could aggregate in theory, but they won’t for obvious reasons, and if they did then that would make the whole thing even scarier.

    3. Jensen keeps simultaneously saying ‘we have an edge in compute’ and also ‘but they have enough compute’ and also (later in the interview and also constantly all the time in general) ‘we should give away a large portion of that edge in exchange for me making more money.’

  8. Jensen goes to energy. China has all the energy. “Why can’t they put 4x, 10x as many chips together, because energy’s free? They have data centers, fully powered, sitting empty. The idea that China won’t be able to have AI chips is complete nonsense. Their capacity of building chips is one of the largest in the world. The semiconductor industry knows that they monopolize mainstream chips. They have over-capacity, they have too much capacity. So the idea that China won’t be able to have AI chips is completely nonsense.”

    1. This is honestly pretty embarrassing for Jensen.

    2. He’s trying to argue that the Chinese don’t need his product, shouldn’t even want his product, have all the chips they need, worse chips can do the same job totally fine, China is over capacity on chips.

    3. This is simply flat out false. It’s very obviously completely not true.

    4. I try not to say this kind of thing lightly, but yes, Obvious Nonsense.

    5. We know this because we see huge actual effective bottlenecks in compute for everything the Chinese are trying to do in AI.

    6. We know this because before controls China had about as much compute as we did, and now they have 10% as much compute as we do.

    7. If China can do all that and has all the chips they need and so on, then why are those data centers that are fully powered sitting empty? Why does no one have enough compute? Where are all the foreign Huawei data centers with all their extra chips they don’t even need, that Sacks told us were coming if we didn’t make the right deals? Come on, now.

  9. Jensen goes on to talk up Huawei. Biggest year in history. They shipped a ton of chips. Millions of chips. Way more chips than Anthropic has. They have plenty of logic, and they have plenty of HBM2 memory. They don’t need EUV for the most advanced HBM, they’re a networking company. Algorithmic improvements are what counts, anyway.

    1. Jensen is clearly flailing here. It reads like tilt, and anger, and desperation, and doubling down on a story that makes no sense. Throwing words at the wall and seeing what sticks, just deny deny deny.

    2. Would be funny to intercut this with when he’s talking up Nvidia.

  10. We get the ‘tech stack’ argument. “DeepSeek is not an inconsequential advance. The day that DeepSeek comes out on Huawei first, that is a horrible outcome for our nation.” Dwarkesh flat out asks, why? Jensen says, suppose it is ‘optimized for Huawei.’ Our hardware would be at a disadvantage.

    1. I don’t even know what to say at this point. Why do we care which hardware that particular model is a little more efficient running on? It makes no difference. This is dumb. Nvidia is selling every chip it can make, will do so for a long time, and Jensen does not dispute this.

    2. Jensen is outright saying the bad outcome would be if Nvidia were put at a particular competitive disadvantage. That kind of gives the game away, and he’s about to give the game away a lot more.

    3. Oh, also this obsession with DeepSeek in particular continues, as Jensen sees it once again as an ‘important advance.’ This is tactical. DeepSeek is a good lab especially given its severe hardware limitations, but their big triumph is learning how to get remarkably far while being starved for compute, and they likely haven’t had the best Chinese model in some time, and as I’ve explained repeatedly the ‘DeepSeek moment’ was badly overhyped.

  11. “You described a situation that I perceive to be good news. A company developed software, developed an AI model, and it runs best on the American tech stack. I saw that as good news. You set it up as a premise that it was bad news. I’m going to give you the bad news, that AI models around the world are developed and they run best on non-American hardware. That is bad news for us.”

    1. Mic drop. QED. Rest my case. Mask off. Thank you, sir.

    2. As in, he thinks that if the Chinese develop the best model, so long as it runs best on his hardware, that’s good news. That’s a win.

    3. Whereas, if the Chinese develop a model that isn’t the best, but it runs better on Huawei chips than Nvidia chips, then that’s bad news. That’s a loss.

    4. All he cares about are Nvidia’s hardware sales. Stop pretending otherwise.

  12. Dwarkesh asks, can’t models just swap accelerators anyway? Jensen says no, and ‘I am the evidence.’ Nvidia’s success is perfect evidence. Dwarkesh points out people do it anyway, Jensen says they don’t run better. Anthropic’s models run on Trainium and TPUs, but ‘a lot of work has to go into that change.’

    1. “La preuve, c'est moi.”

    2. This is his whole attitude throughout. The authority has spoken, peasant.

    3. Yes, of course they don’t run better when you do a straight swap. Nvidia’s chips are better and yes there is some value in optimization. But the point is that in most cases the efficiency loss is moderate, so you use what you’ve got.

    4. Jensen didn’t address that Anthropic is running its same models on three distinct hardware architectures and it’s going fine. You do the work.

    5. The work is going to get easier because the AIs can do the work.

  13. “But go to the global south, go to the Middle East. Coming out of the box, if all of the AI models run best on somebody else’s tech stack, you’ve got to be arguing some ridiculous claim right now that that’s a good thing for the United States.”

    1. Okay, this is just tilt now. No one said that, on several levels. You mad bro?

    2. Let’s break it down.

    3. Where did ‘all of the AI models’ come from here? We’re discussing the possibility of some AI models being optimized for non-USA hardware. The most important models, likely the best models, would not be in that group.

    4. The ‘tech stack’ here no longer includes the AI model. The point of the ‘tech stack’ is that it includes both the hardware and the model, so this isn’t even a real tech stack.

    5. There is a huge difference between ‘runs most efficiently per use of compute on non-USA hardware’ versus ‘runs best on non-USA hardware.’ The Nvidia hardware is better than the non-USA hardware. So even if there is some substantial efficiency loss in the swap, you would still benefit from the large gap in performance.

    6. He is agreeing this only applies ‘out of the box’ without ‘doing the work,’ but in the future very obviously someone else will do the work and you’ll be able, with help from your AI, to benefit from them having done the work, if that work is valuable.

    7. Jensen just got done arguing that no, it doesn’t matter how good your chips are, you can just string together a lot more chips and everything is fine.

    8. It’s kind of funny that this response against the Nvidia CEO is largely me talking up Nvidia chips while he talks them down.

  14. “Why do you think it’s perfectly fungible, that if you didn’t ship them compute it would exactly be replaced by Huawei? They are behind, right? They have worse chips than you.” “It’s completely… There’s evidence right now. Their chip industry’s gigantic.”

    1. I don’t know how else to say this, except that Jensen is at best bullshitting.

    2. He’s saying that whether or not Nvidia sells compute to China will not impact how much compute China has. He argues even with this very minimal claim.

    3. At what point do we agree to acknowledge who and what this person is?

    4. It goes on like that for a while without saying anything new, until we get another moment.

  15. “Listen, why are you causing one layer of the AI industry to lose an entire market so that you could benefit another layer of the AI industry? There are five layers and every single layer has to succeed. The layer that has to succeed most is actually the AI applications. Why are you so fixated on that AI model? That one company? For what reason?”

    1. Any questions?

    2. Dwarkesh is perhaps partly at fault here, tactically, for not yet emphasizing the other reasons why one might want your country to have a lot more access to compute, especially compute priced well below fair market price, that go beyond the direct training costs.

    3. Dwarkesh is still the best interviewer out there, trying to play a very difficult hand. The witness is hostile and uncooperative and unreliable, and a lot of what he’s doing is actually getting two hours to talk to the witness and even argue directly with him without Jensen storming off. It’s a hard job, the same way it is a hard watch or read.

    4. Thus, even in the world where superintelligence and decisive strategic advantage are not things, he’s failing to understand that the economic impacts of AI are ultimately about who uses what inference for what purpose, in what quantities. Nvidia is the most valuable company in the world but even in the ‘AI as normal technology’ worlds ultimately his share of the profits is, in relative terms to the application and model layers, bupkis. The applications succeed because you have the models and compute you need to develop, deploy and run the applications.

    5. Very obviously, again even in ‘normal technology’ worlds, selling Nvidia chips to China doesn’t benefit the ‘American tech stack’ or help the rest of the layers of this supposed cake. It would run on Chinese energy, running Chinese models for Chinese applications. And then, if this really is a normal technology world, the Chinese chips, likely designed by AI using Nvidia chips, eventually replace the American ones once they catch up on that.

    6. The ‘one company’ thing is a bizarre thing to say, as if this is purely about Anthropic. It obviously isn’t, and Anthropic mostly doesn’t even use Nvidia chips. It simply happens that Mythos is the example of a capability advance. Very obviously the same logic applies to OpenAI, Google and xAI on one side, and the Chinese companies on the other.

  16. Dwarkesh tries again to talk about how much better Nvidia chips are, and how while China is struggling to scale 7nm Nvidia is moving on to 3nm and then 2nm or even 1.6nm. And he points out that ‘China has limitless energy’ is an argument that every chip you sell to China is that much more compute China has, since there is no other limiting factor.

    1. Good talk, and good attempt.

  17. “Listen, I just think you speak in absolutes. I think the United States ought to be ahead. The amount of compute in the United States is 100x more than anywhere else in the world. The United States ought to be ahead. Okay. The United States is ahead.” … “why is it that we don’t come up with a regulation that’s more balanced so that Nvidia can win around the world instead of giving up the world? Why would you want the United States to give up the world?” He says, as long as only America gets Vera Rubin, how is that not good enough?

    1. Jumping in this context to a claim of 100x is wild.

    2. Even 10x completely steamrollers Jensen’s earlier arguments about China not needing more compute, if you think about it.

    3. Again, all he cares about is his market share, and thinks this is ‘giving up the world.’

    4. This also implies that there is a set of fixed markets being competed for, rather than there being a fixed supply of chips where no one has enough.

    5. How about an obvious compromise, where if Nvidia can make enough chips to meet Western demand then we can talk about selling the rest? No, he strongly opposes that sort of thing as well.

  18. Jensen calls any comparison of AI to nukes or missile casings ‘lunacy.’ He calls comparing compute to uranium a ‘lousy’ and ‘illogical’ analogy. No argument is offered. When asked about the zero-day exploit issue, he says you solve that via ‘dialogues to make sure that people don’t use technology in that way.’

    1. He’s resorted to flat out name calling at this point.

    2. On the question of cybersecurity, I repeat my logic from above, and just want to emphasize how obviously naive this answer is. China releases its models open source. Even if you got China to agree not to do cyberattacks, and even if you got China’s government itself to not do the attacks themselves, and even if you got them to try and enforce this within China, then what? How are you going to enforce it? Even if you enforce it within China somehow, what happens when the North Koreans DGAF, since they obviously DGAF?

    3. Again, one cannot simply ‘agree not to use AI for [X],’ unless there are a highly limited number of actors who could do [X], such as when it involves existing nuclear weapons. You have to not permit that capability to exist in the first place, or at minimum you have to provide extensive monitoring of the relevant sources of that capability, worldwide. Please take this seriously, sir.

  19. He then comes back to saying “conceding the entire market is not going to allow the United States to win the technology race long-term in the chip layer, in the computing stack.”

    1. Dwarkesh is very much not making the argument that not selling chips to China is good for our long term ability to win the chip layer in particular.

    2. Jensen keeps emphasizing this because that is the only thing he cares about.

    3. Since Dwarkesh is not making the argument here, I suppose it is up to me to make the argument. So let’s do that, in two parts.

    4. First off, Nvidia can already sell every chip it can make, and Huawei can also sell every chip it can make. If you sell a Nvidia chip in China, all that does is physically move that chip to China. If you do that often enough for long enough and scale up fast enough, then yes, that would change, but that’s not the situation.

    5. Thus, it is not obvious at all that selling chips to China would change the medium or long term chip situation at all, and it almost certainly would not impact anyone’s short term chip sales. Except insofar as Nvidia intentionally made chips intended only for Chinese consumption, instead of making chips for America.

    6. China highly values self-sufficiency on chips. I would value other things relatively more, but this is a very sensible thing for them to be caring about, and they are not about to let this go. They have also shown a willingness to restrict Nvidia sales inside China, towards this goal. Thus, we should conclude that if in the future Nvidia sales in China were threatening Huawei’s ability to make and sell more chips, that China would intervene to favor Huawei. To the extent Nvidia’s sales will matter here, they will be stopped. Even in this context, you only get to make sales that are a mistake.

    7. In the long run, a key limiting factor on everything is intelligence and compute, and the ability to solve various problems and create superior designs. Again, this is true even in the ‘AI as normal technology’ worlds that skeptics like Jensen say they expect. If you sell China a lot of chips, and they have better AI models and more compute with which to run them, they then use those better models more often to create better chip and EUV designs, the same way they advance everything else.

    8. Meanwhile, those sales supercharge China’s economy at the direct expense of our own, which also hurts our ability to do everything and helps theirs.

    9. So no, this is not ‘just a fact’ even on its direct level. It is highly plausible that holding back AI chips helps you in the long run market for AI chips.

  20. Dwarkesh engages on the narrow chips question, noting that Tesla and iPhones didn’t get lock-in in China. Jensen doubles down on ‘what matters is the richness of our ecosystem.’

    1. One could also cite numerous cases of technology transfer, reverse engineering and so on, as arguments against letting them get the chips.

    2. ‘Our’ here means Nvidia and CUDA. He doesn’t care about the models or applications or economic activity being Chinese, because that’s not ‘our.’

  21. He seems very insulted by the comparison to a car, Nvidia is not a car, you cannot ‘buy this car brand one day and use another car brand another day, easy.’

    1. Well, actually, yes you can, and people do. Not losslessly with no notice, but yeah, people do this all the time.

  22. The hits keep coming: “Conceding a marketplace based on the premise you described, I simply can’t acknowledge that. It makes no sense. Because I don’t think the United States is a loser. Our industry is not a loser. That losing proposition, that losing mindset, makes no sense to me.” “You don’t have to move on. I’m enjoying it.”

    1. “I simply can’t acknowledge that.” No, you can’t.

    2. “Is not a loser.” “That losing mindset.” Very telling. Someone hit a nerve.

    3. This man leads the most valuable company in the world, that sells out all of its products, and he’s terrified of being a ‘loser.’

    4. But he actively wants to keep this going, even when Dwarkesh realizes this is going in circles into tiltland.

  23. “And I just want you to acknowledge that any marginal sales for the American technology industry is beneficial.” “The logic that you use, you might as well say it to microprocessors and DRAMs. You might as well say it to electricity.”

    1. Jensen doesn’t want to understand this. He thinks ‘America sells thing’ should just be seen as good for America or the American tech industry.

    2. And he claims this is on the level of obvious, one could not argue with it.

    3. But very obviously this argument proves too much, and it is not made of gears. Why does this marginal sale net benefit the rest of the tech industry?

    4. Jensen’s argument seems to rely on us being ‘far enough’ ahead that it’s fine to give some of that back, while at other times he argues for the opposite.

    5. Yes, it is a correct default assumption that any given marginal sale is good, if you don’t have any other information. Here we do have a lot more information.

  24. “We have tons of compute. We have tons of AI researchers. We’re racing as fast as we can.”

    1. We could have more compute.

    2. We could have more AI researchers, if we had more compute and if we had more willingness to brain drain Chinese and other talent.

    3. Thus we are not racing as fast as we can.

    4. To be clear: This is not me saying we should race, or race as fast as we can.

  25. [More of the same arguments going back and forth, with Jensen continuing to say contradictory things and continuing to insist there is no contradiction.]

    1. Including saying the American telecommunications industry was ‘policed’ out of basically the entire world, which is not a good word for what happened even if you buy the mercantilist thesis, nor is the situation a parallel.

    2. Dwarkesh says ‘I’m trying to make you understand the cost of selling the chips’ and Jensen responds by once again repeating what he sees as the cost of not selling the chips. As in, no, I’m not interested, sir.

  26. Jensen continues to think that the AI ‘application layer’ is the one that matters most, not the model layer.

    1. But even if that’s true, then that’s still a reason to hoard the compute.

  27. Jensen keeps talking about ‘losing the world’s second largest market’ for the entire tech stack. He seems to continuously claim: If the Chinese use CUDA and Nvidia, then our tech stack ‘wins’ the Chinese market in a meaningful sense for the model and application layers that matter most.

    1. And I’m here to say, no, this makes no sense, even in ‘normal technology’ worlds, it does not matter very much whose chips are being used if the models and applications are Chinese.

    2. I don’t understand why, other than Nvidia’s profits, this is hard to understand.

    3. Then again, that is exactly why we have a saying about it being difficult for a man to understand something in such scenarios.

    4. Actually I think Jensen understands perfectly well and is pretending not to.

  28. Jensen makes the good point that if we scare everyone in America into hating AI and away from doing software engineering, then that would not be good for us. He goes back to radiology, the difference between a job and a task.

    1. Making Americans hate mundane AI use, and fear the impact of mundane AI (or AI as normal technology) in our lives, to the point where Americans refuse to diffuse it and use it to our benefit, would indeed be a massive mistake. We should work quite hard to avoid this.

    2. America disliking mundane AI has, if you look at the data, very little to do with Americans fearing AI existential risk, or even the catastrophic risks that people like me worry about.

    3. Americans do worry about those things when prompted, and often unprompted, but this is low salience.

    4. What ordinary Americans mostly care about are things like job losses, the internet filling with slop or deepfakes, environmental impacts and so on. This is unfortunate, and I try to discourage it, but this is our reality. And this has very little to do with the question of export controls or AI existential risk.

    5. Who is discouraging people from being software engineers? I’m not sure. I think it is mostly people trying to think about the economics.

    6. Going back to the radiologist thing, I would refer back to my earlier analysis, and also note that the shortage is largely caused by regulatory capture, in that we require doctors, and in particular radiologists, to take performative actions. We could, if we wanted to, now train a lot more radiologist assistant practitioners, or whatever we wanted to call them, that could do the remaining parts of the job while relying on AI, if we decided to legalize this. And perhaps the shortage eventually speeds up when we do that.

    7. I don’t think any of this bears on the actual questions being asked by Dwarkesh, but they’re things relevant to our interests here, and when Jensen makes a good point I should highlight it, since I’m hammering him a lot.

  29. Jensen points out that lithography advances account for maybe a 75% improvement from Hopper to Blackwell, so the Nvidia architecture is responsible for most of the 50x total gains.
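
As a rough sanity check on that decomposition (a sketch only, assuming “75% improvement” means a 1.75x multiplicative gain from process alone, and taking the quoted 50x total at face value):

```python
# Rough decomposition of the claimed Hopper -> Blackwell gains.
# Assumption: "75% improvement" from lithography means a 1.75x multiplier,
# and performance gains compose multiplicatively.
total_gain = 50.0   # overall claimed Hopper -> Blackwell gain
litho_gain = 1.75   # assumed process-node (lithography) contribution

# Everything not explained by process is architecture, software,
# packaging, networking, and so on.
other_gain = total_gain / litho_gain

print(f"Implied non-lithography gain: ~{other_gain:.1f}x")
# -> Implied non-lithography gain: ~28.6x
```

Under those assumptions the non-process contribution dwarfs the process one, which is the point Jensen is making.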

Okay, thus endeth the key section everyone is talking about.

Different Chip Architectures

  1. Why doesn’t Nvidia also make more modern versions of N7 chips or similar? Jensen replies it is not necessary, then gives the real answer of R&D costs.

    1. I buy this. Focus is crucial for a company like Nvidia. Better to spend all your engineers making the next chip ten times better.

  2. What about completely different chip architectures? They don’t have a better idea, they simulate the other options, they’re worse. He’s folding Groq into CUDA, tokens are worth paying for now, and he’d like to invest more in Nvidia architecture.

    1. All seems fair, except it’s odd to say ‘if I had more money’ as the head of Nvidia? Seems like he should have all the money he needs for this?

  3. Where would Nvidia be today without deep learning? Accelerated computing.

  4. Jensen reiterates that he enjoyed the interview.

    1. I can believe it. Even though he seemed highly frustrated and tilted, how often does someone like Jensen get to have a real argument? How many people actually push back? It can be quite the relief, in its own way.

    2. I’m going to see Jensen accept an award next week, because I randomly got invited.

The Online Reactions On Export Controls

Daniel Eth and Connor Williams are among those who view Jensen’s arguments against export controls as fully amoral, purely about making money, and as not remotely making sense. There were also many others.

Dmitri Alperovitch: Incredible interview with Jensen. He blatantly admits that his jihad against export controls is simply all about Nvidia selling more chips worldwide, not about national security or winning the AI race against China (which he previously said doesn't even matter if we win)

I think such reactions are about one notch too harsh. But basically yes, these are the strongest arguments Jensen can make, and they are quite weak.

Tenobrus: jensen in the dwarkesh interview isn’t “wrong” per se. he simply does not care about the truth. he cares about selling as many Nvidia chips as possible, whatever the consequences. he’s very visibly engaging in motivated reasoning to justify this. why would we expect different?

This would lower my opinion of Jensen as a thinker and communicator and humanist and american if it were already high on any of those dimensions. but it's not. my opinion of him is high as a businessman, and as a businessman he wants to sell chips to china.

deeply appreciate @dwarkesh_sp for pushing as hard as he did here. i hope the visibility of Jensen’s incoherence on this makes it harder for Nvidia to justify themselves going forward

Alec Stapp: This is the key moment between Jensen and Dwarkesh on export controls:

1. Dwarkesh asks why it’s okay to sell NVIDIA chips to China given the national security implications of AI models like Mythos.

2. Jensen gives a misleading answer, arguing that it’s okay to sell American chips to China because China already produces 60% of the world’s chips.

3. But as Jensen definitely knows, compute is measured in flops, not number of chips.

4. Dwarkesh then pushes back, pointing out that on a flops basis, China has 10% of the compute the US has, and giving them more compute would change their cyber capabilities.

This exchange shows why it’s critically important for interviewers to have at least some technical knowledge, so they can push back against misleading talking points.

Alex Imas: Jensen has been doing what seems like a 24/7 interview cycle for months, and the number one question from the beginning should have been this exact exchange.

I don't know if it's the decline of old media---where journalists are just not pushing and asking questions in the same "investigative" style that they used to---or something else. But I'm glad we have @dwarkesh_sp to do the research and shine the light.

William Buckskin: I like Jensen, but this is exactly why we have government.

He’d sell us out for China for his investors. We obviously can’t allow that to happen

I don’t overly begrudge Jensen being a capitalist who will sell to whoever wants to buy, and leaving it to others to decide to whom he is permitted to sell. The issue is that he keeps trying to mislead us to get permission.

There are those in these exchanges who attempt to defend Jensen, you can find them if you click through, but I also found those arguments quite poor. This from Ed Elson was the most serious attempt I’ve seen, but his own metaphors go in the other direction if you think them through.

Here is one full explanation, responding to the distillation.

Peter Wildeford: Jensen here is frustrating and wrong. The man wrote off billions so of course he opposes controls.

1. Mythos is a ~10T parameter model trained on Amazon Trainium. Despite Jensen's best efforts, China doesn't have [Blackwell or similarly capable] chips thanks to export controls.

Huawei's best chip delivers 1/3 the per-chip performance, at 2.5x the power cost, with yields >12x worse. Jensen calling Mythos "fairly mundane capacity" that's "abundantly available in China" is just plainly false.

2. Dwarkesh is right that the compute ratio matters geopolitically. Maintaining a capability lead during the critical window — even 12-18 months — is the whole point of controls. The difference between China running a thousand vs. a million offensive AI agents is huge. Jensen dodges this entirely.

3. Jensen can't simultaneously argue "controls failed because China innovated anyway" (DeepSeek) AND "we must sell to China or they'll leave our ecosystem." If they'll innovate regardless, selling chips doesn't buy the loyalty he claims.

4. Jensen's ecosystem stickiness point (x86, Arm) is his strongest argument, but it cuts against him: the world is already locked into CUDA. Selling Nvidia chips to China doesn't deepen that - it just gives China better hardware while they build Huawei alternatives regardless.

An obvious point several people hammered, that I also noticed: If China has the energy to use unlimited chips, that’s all the more reason not to sell them the chips.

Theo Bearman: Jensen on China: "The amount of energy they have is incredible. Isn't that right? AI is a parallel computing problem, isn't it? Why can't they just put 4x, 10x, as many chips together because energy's free? They have so much energy. They have datacenters that are sitting completely empty, fully powered. You know they have ghost cities, they have ghost datacenters too. They have so much infrastructure capacity. If they wanted to, they just gang up more chips, even if they're 7nm."

This is exactly why we need to ramp up export controls across all elements of the semiconductor manufacturing stack rather than help the Chinese maximally leverage their advantage in ready-to-deploy powered shells with leading American GPUs and "50% of global AI researchers" to boot. The US doesn't currently have that luxury, with long lead times for power and cooling components, permitting and data-centre buildout.

With UKAISI now saying AI capabilities are doubling every four months, the net result of Jensen's strategy to try and get China hooked on the American tech stack will be one thing: the surrender of Western AI advantage, perhaps for good. Sure, there might be downsides to going heavy on export controls, but the alternative is much worse.

Peter Wildeford: Jensen apparently was also unintentionally making the case *for* export controls on Dwarkesh:

"They [China] have datacenters that are sitting completely empty, fully powered. You know they have ghost cities, they have ghost datacenters too."

Imagine if China had the chips!

Peter also explains that Huawei can currently match 1%-4% of China’s market demand, and that China’s government is going to ensure unlimited demand for Huawei chips regardless, and they’d push out Nvidia to do it if necessary when and if that time comes.

Yes, Huawei production will expand over time, although likely not in the short run due to bottlenecks. But even if they do, so will Chinese demand, and it is not obvious they are on a path to catch up, even with an inferior product.

Or as Benjamin Todd put it:

[

](https://substackcdn.com/image/fetch/$s_!j-WE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f129868-71de-4670-b9dd-7db1b4880e6d_900x491.jpeg)

Is This About Being Superintelligence Pilled?

This certainly is a major factor in how you view such arguments, and rightfully so, but as I’ve said throughout, I don’t think you need to believe in AGI/ASI in order to think Jensen is wrong about export controls.

Sriram Krishnan: Every person here's reaction to the Jensen + @dwarkesh_sp podcast can be extrapolated *directly* from whether they believe in the frontier labs achieving short timelines for AGI/ASI.

If you believe in the labs achieving RSI and then AGI/ASI (for some definition of all three) in the next few years, you'll probably be sympathetic to the frame @dwarkesh_sp adopts.

If not, you're probably more sympathetic to the arguments from Jensen.

(if anyone here doesn't fall into this pattern, would love to hear!)

I would put it this way:

  1. If you believe AGI/ASI is plausible in the medium term (as in up to ~10 years), then the case Jensen makes against export controls is completely unconvincing.

    1. There are still arguments you can make against export controls, that might have some merit, but I would file those arguments under ‘galaxy brain takes.’
  2. If you believe AGI/ASI is more than 10 years away, and perhaps never coming, then you should be more receptive to Jensen’s argument, but you should still reject his arguments for the reasons I argued throughout.

    1. Dean Ball makes a related point here. In light of Mythos, and any reasonable expectation of what AI can do in the next few years, you don’t need to believe in ‘AGI’; you only need to believe in important strategic implications of AI, and we are already there today, which is enough to invalidate Jensen’s case.
  3. If you not only don’t believe AGI/ASI is plausible, but you also think that it won’t matter much who has access to the bulk of the compute in the medium term, and it also doesn’t much matter whose models and applications people use, then and only then are Jensen’s arguments strong.

    1. As in, you think AI won’t much matter, so might as well make money on chips.

    2. But if so, you should probably also be short the market, especially Nvidia.

    3. Are you short the market?

    4. Also, we straight up don’t live in such a world. Between Claude Code and Codex, GPT-5.4, Opus 4.6+ and Mythos, we have ruled it out.

  4. You could make a steelmanned version of Jensen’s argument, that has been made by the likes of David Sacks, which is that dominance of Nvidia hardware and CUDA within China also leads to dominance of American models and applications, because they form one coherent ‘tech stack.’

    1. I think that argument is false on the merits, for overdetermined reasons, even if you don’t believe in AGI/ASI, because it describes a world we don’t live in.

    2. I can imagine such a world existing, but it would look very different.

What should we think about the failure of Jensen to find better arguments?

Dean W. Ball: It’s a shame Jensen mostly fails here, because the monoculture on export controls is bad. If you’re a young AI policy researcher trying to make a name for yourself, it is almost impossible to be taken seriously unless you are pro export controls. Monocultures are usually bad.

Policy debates should not appear one sided, except when the sides are:

  1. Make everyone else worse off so I can make more money.

  2. No.

Position number one often wins such debates, because the special interest cares quite a lot about concentrated benefits, versus others caring less about diffuse costs. But yes, in cases where someone is seeking rent, or seeking to do something destructive, you will get a very one sided policy debate on the merits.

If the policy debate is one sided, I want to believe that the policy debate is one sided.

If the policy debate is not one sided, I want to believe that the policy debate is not one sided.

Thus, if good arguments against export controls exist, we want to hear them, even if ultimately we think export controls are good. Also, if they exist, I haven’t heard them.

The lack of being sufficiently pilled is also, again, why Jensen ‘lost’ Anthropic, and also a lot of how the current United States government tried to ‘lose’ Anthropic, at a time when the mistake was a lot less understandable.

Dean W. Ball: In this regard the most interesting moment in Jensen/Dwarkesh is not the debate about chip export controls but instead where Jensen says he didn’t understand Anthropic’s scaling needs when approached about an investment in them a couple years ago. He admits he was un-pilled.

The Biden administration officials and EAs, who jensen casts as technologically clueless, would have understood Anthropic’s scaling needs much more intuitively a couple years ago than Jensen admits to in that interview. It’s not about savvy or intellect, it’s about pilledness.

Matt Beard raises an excellent point, and highlights Jensen saying “Although AI is the conversation today” when trying to downplay TPUs, so yeah, still highly unpilled.

Jensen’s Arguments Are Poor Both Logically And Rhetorically

There are (at least) two ways an argument can be poor.

  1. An argument can be logically poor, and without underlying merit.

  2. An argument can be rhetorically poor, and unconvincing to listeners.

The problem with Jensen’s arguments, and accelerationist AI arguments in general, is that they are usually poor in sense #1, and consistently poor in sense #2, at least when applied to general politics or the public.

Anton Leicht is warning accelerationists that they are slowly but surely losing ground. The strategy has been to argue against any and all asks, insist on conceding nothing, and play pure hardball politics, without the rhetoric and support to back it up. Failing to shape the eventual rules or get ahead of actual harms works until it spectacularly doesn’t.

Those who buy these arguments were always rather niche, and as AI capabilities advance that becomes more true every day, including today with Opus 4.7.

Dean W. Ball: Dwarkesh/Jensen reveals how inconsistent and un-battle-tested AI acceleration talking points are, especially when they are filtered through the prisms of corporate comms and mass politics. Strategically coherent accelerationism is possible (I try!), but not currently prevalent.

I really do say this as an accelerationist fundamentally. It has always been clear that the default AI acceleration stance developed most especially during SB 1047 was not going to stand the test of time (the default anti-1047 argument hinged on AI not improving very much and a funhouse conception of diffusion as “a totally intractable mystery problem” rather than “an obstacle”; this is basically still the default AI acceleration argument), and that a new, more complex path would be needed.

I’ve tried to chart this path in my own mostly-between-the-lines way but I’m just going to be explicit for a moment that a new approach is obviously going to be needed _for those who are excited about AI, think it’s likelier than not to go well (especially with the big risks competently managed and self-aware strategic execution) and want to embrace it with alacrity_.

Dean W. Ball: However, one must acknowledge that, even though Jensen said it in the midst of discursive retreat, “that loser premise makes no sense to me” goes hard as a phrase

[

](https://substackcdn.com/image/fetch/$s_!wNXP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa76ad542-7058-43f5-957f-ff1dd2870c34_828x459.jpeg)

Nathan Calvin: I generally came away from that interview completely unpersuaded by Jensen's arguments, but convinced the man is a force of nature and cool in a sort of brutal ur-techno-capitalist way

Dean W. Ball: in a sense the flex has always been the logical inconsistency

I think ‘loser premise makes no sense to me’ is an extremely telling phrase into Jensen’s psychology.

I think it is causal. As in, that premise would make me a loser, ergo it makes no sense.

Cause if there’s one thing to know about Jensen Huang? He’s a winner.


China's AI Giveaway Finally Pays. Thank the Agents.

1 Share
  • Strategic monetization: Chinese AI firms shifted from open-access models to profitable token-based revenue streams through agent integration
  • Increased consumption: automated agent workflows drove substantially higher daily token usage than standard human chatbot interactions
  • Pricing power: significant API price increases by industry leaders did not diminish call volumes, demonstrating inelastic demand
  • Structural reorganization: major tech providers consolidated disparate divisions into specialized token hubs focused on commercializing token consumption
  • Licensing changes: commercial restrictions on previously open models reflect a tactical pivot toward securing long-term business viability
  • Distribution moats: cloud giants integrating models across existing consumer ecosystems gained a competitive advantage over pure-play labs
  • Market risks: potential disruption from aggressive pricing by competitors like DeepSeek threatens to cap or erode industry margins
  • Global impact: increased Chinese market revenue and operational scale have shifted external focus to competitive pricing and regulatory sanctions

Last November at the University of Hong Kong, Joe Tsai tried to explain why Alibaba was giving away its flagship AI model for free. The room of students wanted a punch line. Tang Heiwai, the associate dean, pressed him: how do you guys make money by being so generous? Tsai smiled. "We don't make money from AI," he said. "That's the answer."

Five months later, the punch line arrived.

Zhipu AI raised API prices 83 percent in the first quarter of 2026. Call volumes jumped 400 percent anyway. Tencent Cloud hiked pricing on its Hunyuan series by more than 400 percent. Moonshot's Kimi K2.5 earned more in roughly 20 days than the company made in all of 2025, according to Voice of Context. Alibaba reorganized its entire AI operation into a single unit called the Token Hub, with CEO Eddie Wu writing a letter whose only recurring word was tokens.

The giveaway finally found a cash register. It took an AI agent to install it.

The Argument

  • Chinese open-source AI was a loss leader waiting for something to monetize. Agents turned out to be it.
  • Zhipu raised API prices 83 percent in Q1 2026 and call volumes still jumped 400 percent. Demand stopped responding to price.
  • The licenses are quietly closing. MiniMax restricted commercial use, Alibaba reorganized into a Token Hub, Z.ai and Baidu are hiking prices.
  • Cloud giants with distribution win. Pure-play labs without a moat look less durable than their share prices suggest.

AI-generated summary, reviewed by an editor. More on our AI guidelines.

The meter that never spun

For two years, Chinese open-source AI was a loss leader in search of a profitable side dish. Alibaba's Qwen hit nearly a billion cumulative downloads. DeepSeek's R1 reset global pricing. MiniMax and Zhipu rode the wave to blockbuster Hong Kong listings. Every model was cheap or free. Every margin was thin.

That was the plan. The formal name is commoditizing your complement. Make one layer of the stack effectively worthless, then charge for the adjacent layer that users now depend on. Google did it with Android. Apache did it with servers. Alibaba did it with Qwen, expecting to monetize cloud the way Tsai's hotel metaphor predicted: free room, paid minibar.

The minibar stayed empty.

Alibaba Cloud's computing margins still sit in the single digits, according to the company's latest earnings. OpenAI and Anthropic book 40 to 50 percent on closed models. Daniel Yue, a professor at Georgia Tech's business school, calls this the price of commoditizing yourself too well. When every model is free and roughly equivalent, nobody has pricing power. Not even you.

Every token call looked more or less the same through that period. A user typed a prompt. A model produced a completion. The interaction ended. Even heavy users were buying one-turn conversations, then walking away. That usage pattern is poison for anyone hoping to charge premium prices. Switching costs are near zero. Volumes are small per user. Benchmark differences of three or six months barely justify moving inference between providers. Zhang Jiang, a consultant who advises Chinese AI startups, told the South China Morning Post that Chinese firms were stuck in a volume-driven race instead of a value-driven one. Abundance was the only lever they had left.

The industry had built distribution without a way to charge for it. For years, nothing downstream had pricing power.

What agents rewired

OpenClaw changed the unit of sale.

An agent running a research task does not call a model once. It plans. It decomposes. It calls the model again, reads the result, reasons about it, retries when things break. A single overnight session can burn through tokens at a volume that dwarfs an entire month of chatbot use by the same person. MiniMax's average daily token consumption on its M2 series climbed six-fold between December 2025 and February 2026. Zhipu CEO Zhang Peng told a recent industry event that users now leave OpenClaw running while they sleep, an intensive process he described as potentially boosting demand exponentially.

That is not a marginal shift. That is a different product.
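The plan/act/observe pattern described above can be sketched as a toy loop. Everything here (the function names, the stub model, the step count) is invented for illustration and is not any vendor's real API; the point is only the call-count arithmetic.

```python
# Toy illustration of why agent workloads multiply model calls
# relative to one-turn chat. All names and the stub model are
# invented for illustration, not any real API.

def run_chat_turn(model, prompt):
    """A chatbot interaction: exactly one model call."""
    return model(prompt)

def run_agent_task(model, task, max_steps=20):
    """An agent loop: one model call per plan/act/observe step.

    The growing transcript is re-sent on each step, so input
    tokens grow faster than linearly even as call count grows
    linearly.
    """
    transcript = [task]
    for _ in range(max_steps):
        action = model("\n".join(transcript))
        transcript.append(action)
        if action == "DONE":
            break
    return transcript

# Stub model that declares the task finished on its 8th call.
calls = {"n": 0}
def stub_model(prompt):
    calls["n"] += 1
    return "DONE" if calls["n"] >= 8 else f"step {calls['n']}"

run_agent_task(stub_model, "overnight research task")
agent_calls = calls["n"]          # 8 model calls for one task

calls["n"] = 0
run_chat_turn(stub_model, "one question")
chat_calls = calls["n"]           # 1 model call for one chat turn

print(agent_calls, chat_calls)
```

Eight calls versus one in this toy, and a real agent left running overnight compounds this over hundreds of steps.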


Agents create something chatbots never did: lock-in. Once an agent is configured with a specific model's tool-calling format, memory structure, and prompt conventions, switching providers mid-workflow breaks things. You can argue the models are interchangeable. In practice, enterprises running agent workloads on top of Qwen or GLM-5 stay put, and our earlier reporting on Nvidia's open-weights strategy made the same point from a different angle: open is a wedge, not a gift.

What agents produce, in economic terms, is inelastic demand. Users can't easily switch. They need more tokens per task. That is the exact shape of a profitable market. Zhipu proved it the hard way. API prices up 83 percent in Q1. Call volume up 400 percent at the same time, according to Zhang's own earnings call. The math doesn't lie. Demand stopped responding to price.
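As a back-of-the-envelope check on those two figures (the multiplication is my inference, not the article's), the implied revenue swing is roughly nine-fold:

```python
# Revenue implication of Zhipu's reported Q1 2026 numbers:
# API prices up 83 percent while call volume rose 400 percent.
# The two inputs are from the article; combining them is mine.
price_mult = 1.83              # +83% price
volume_mult = 5.00             # +400% volume means 5x
revenue_mult = price_mult * volume_mult
print(round(revenue_mult, 2))  # roughly 9x revenue on that API line
```

Strictly speaking, volume rising alongside a price hike is stronger than "inelastic": the demand shift from agents swamped the price effect entirely.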

When the licenses started changing

If open source was ever an ideological commitment, this would be the moment to double down. The mask started slipping instead.

MiniMax released its latest M2.7 model and amended its terms of use to prohibit commercial use without authorisation. The open-source community felt blindsided. Florian Brand, a San Francisco-based analyst of China's AI ecosystem, told SCMP that licensing restrictions could erode Chinese models' popularity among global developers given the wealth of alternatives. Daniel Yue argued the opposite, noting that such moves are not new in open-source software, just a sign that companies are trying to secure long-term commercial viability.

Alibaba and Z.ai released some of their newest models as closed-source at launch. Both, plus Baidu, hiked prices across models and cloud services. Last month, Alibaba reorganized five separate AI units (Tongyi Laboratory, Qwen, the Bailian MaaS platform, the Wukong enterprise agent arm, and an AI innovation group) into what it now calls the Alibaba Token Hub, reporting directly to Eddie Wu. "ATH is built around a single organising mission," Wu wrote in the announcement letter. "Create tokens, deliver tokens and apply tokens."

Read that sentence again. Not one mention of openness. Not one mention of the developer community. The word that appears three times is tokens.

Liu Liehong, the head of China's National Data Administration, formally christened the term at a March State Council press conference with a new Chinese word: ciyuan. Tokens are now, in Liu's phrase, the settlement unit linking technological supply with commercial demand. China's daily token calls rose from 100 billion at the start of 2024 to 140 trillion by March 2026, according to the National Data Administration. Chinese models held the top six spots on OpenRouter's global weekly rankings for the week ending April 5, with Qwen3.6-Plus setting a single-day record of more than 1.4 trillion tokens, according to OpenRouter data cited by People's Daily.
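Taken at face value, the National Data Administration figures imply the following growth rates. The 26-month window is approximate and the per-month rate is my own calculation:

```python
# Growth implied by the National Data Administration figures:
# 100 billion daily token calls (early 2024) -> 140 trillion (March 2026).
start = 100e9
end = 140e12
months = 26                       # early 2024 to March 2026, approximate
overall = end / start             # overall multiple
monthly = overall ** (1 / months) # compound monthly growth factor
print(round(overall), round(monthly, 2))  # ~1400x overall, ~1.32x/month
```

A sustained ~32 percent month-over-month compounding rate is the kind of curve that makes a Token Hub reorganization look less like branding and more like load planning.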

The strategy was never charity. It was patience.

Who this leaves standing

Cloud giants with distribution already win. Tencent's QClaw embeds OpenClaw inside WeChat's 1.3 billion users, according to China Briefing. Alibaba's Qwen agent reaches 300 million monthly actives across Taobao, Tmall, and Alipay. ByteDance's Doubao has 315 million chatbot users and a phone launched with ZTE in December. Distribution plus agent integration equals a meter per customer, spinning whenever the customer sleeps.

Pure-play labs without a distribution moat look less durable than their share prices suggest. MiniMax and Zhipu are up hundreds of percent from their IPO prices. Both are still bleeding. MiniMax posted a $250 million adjusted net loss on $79 million in 2025 revenue. Zhipu's total losses reached $680 million against $104.8 million in revenue, driven by R&D spending up 45 percent. They are effectively becoming suppliers to the cloud giants that can embed their models into hosted agent platforms. You can see why MiniMax amended its license. Cornered suppliers write harder contracts.

Developers outside China lose something too: the assumption that open source means permanently free. Kimi K2.5 charges $0.58 per million input tokens against Anthropic's Opus 4.5 at $5.00. Qwen dominates six of the top ten OpenRouter slots. But licenses can be rewritten. API prices can jump 400 percent in a quarter. The cheap option is a business decision, not a natural law. If you are building an enterprise workflow on Chinese open weights, you should price it that way.
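At the listed input-token prices, the cost gap is easy to make concrete. The workload size below is a made-up example, not a figure from the article:

```python
# Illustrative cost gap at the listed input-token prices:
# Kimi K2.5 at $0.58 vs Opus 4.5 at $5.00 per million input tokens.
# The 10B-token workload is a hypothetical agent fleet, my assumption.
tokens = 10_000_000_000
kimi_cost = tokens / 1e6 * 0.58
opus_cost = tokens / 1e6 * 5.00
print(kimi_cost, opus_cost)   # $5,800 vs $50,000 for the same input volume
```

An 8.6x gap is why enterprises build on the cheap option, and also why a license amendment or a 400 percent price jump lands so hard once they have.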

Washington is watching a different angle of the same shift. The House Foreign Affairs Committee is weighing the Deterring American AI Model Theft Act, a bill from Representative Bill Huizenga that would sanction Chinese entities accused of adversarial distillation from US models. The underlying anxiety is not about openness. It is about pricing power. Cheap Chinese open weights undercut revenue the US labs need to pay for data centers and staff. Now that those weights have a profit mechanism attached, the fight gets harder.

What to watch

Tsai's hotel metaphor gave Alibaba a clean story. Open the room, charge for the service. For two years the rooms were open and nobody came to the lobby. Now a Shanghai engineer leaves an agent running overnight, a Shenzhen sole founder collects a Longgang district subsidy to automate her one-person company, and the meter finally spins.

Watch DeepSeek. Its next model is rumoured for release this month. R1 reset the entire market last year by crushing prices and forcing every competitor open. If DeepSeek does it again, the cash registers that just got installed could fall silent as fast as they filled up. If it does not, Chinese AI has its first real profit lever since the boom began.

The giveaway was always a bet on what came next. Agents are what came next. Now you find out what the bet was worth.

Frequently Asked Questions

What changed for Chinese AI models in early 2026?

Agent-based usage via OpenClaw multiplied token consumption per session. Zhipu raised API prices 83 percent in Q1 2026 and saw call volumes rise 400 percent anyway. Tencent Cloud hiked Hunyuan pricing over 400 percent. Moonshot's Kimi K2.5 earned more in 20 days than Moonshot made in all of 2025. Chinese models finally gained pricing power their open-source strategy had been missing.

Why was Chinese open-source AI barely profitable before agents arrived?

Models were commoditized, switching costs were near zero, and most usage was one-turn chatbot calls. Alibaba Cloud's computing margins stayed in single digits while OpenAI and Anthropic booked 40 to 50 percent on closed models. With every Chinese model free and roughly equivalent, no provider had meaningful pricing power.

What is the Alibaba Token Hub?

A March 2026 reorganization that consolidated Tongyi Laboratory, Qwen, the Bailian MaaS platform, the Wukong enterprise agent unit, and an AI innovation group into a single business unit reporting to CEO Eddie Wu. Wu's announcement letter defined its mission as create, deliver, and apply tokens. The word openness did not appear.

Are Chinese AI companies still open source?

Selectively. MiniMax amended its M2.7 terms of use to prohibit commercial use without authorisation. Alibaba and Z.ai released some recent models closed-source at launch. Baidu and Alibaba both hiked prices across models and cloud services. The pattern suggests open-source was strategic, not ideological.

What could disrupt this pricing power?

A new DeepSeek release. The Hangzhou lab's R1 model last year reset global pricing and forced competitors to open up and cut prices. The next model is rumoured for this month. If DeepSeek ships with similar impact, the cash registers Chinese AI just installed could fall silent as quickly as they filled up.



Analysis

Harkaram Grewal

New Delhi

Freelance correspondent reporting on the India-U.S.-Europe AI corridor and how AI models, capital, and policy decisions move across borders. Covers enterprise adoption, supply chains, and AI infrastructure deployment. Based in New Delhi.


White House Works to Give US Agencies Anthropic Mythos AI - Bloomberg

1 Share
  • Federal Access To Mythos: The White House is establishing security protocols to provide federal agencies with access to Anthropic's restricted artificial intelligence, Mythos.
  • Cybersecurity Risk Concerns: Due to potential misuse by hackers for data theft or infrastructure sabotage, Anthropic has severely limited the public release of the high-capability Mythos model.
  • Software Engineering Update: Anthropic has launched Opus 4.7, an updated AI model optimized for coding tasks, which includes built-in safeguards to restrict high-risk cybersecurity applications.

Technology | Cybersecurity

The White House in Washington.

Photographer: Stefani Reynolds/Bloomberg


By Jake Bleiberg and Margi Murphy

April 16, 2026 at 5:54 PM UTC

Updated on

April 16, 2026 at 6:51 PM UTC

Takeaways by Bloomberg AI

  • The US government is preparing to make a version of Anthropic PBC's artificial intelligence model, Mythos, available to major federal agencies amid concerns it could increase cybersecurity risk.
  • The White House Office of Management and Budget is setting up protections to allow agencies to use Mythos, with more information to be provided "in the coming weeks".
  • Anthropic has limited the release of Mythos due to concerns that hackers could weaponize its capabilities to steal data or sabotage victim networks, and has briefed senior US government officials on the model's capabilities.

The US government is preparing to make a version of Anthropic PBC’s powerful new artificial intelligence model available to major federal agencies amid concerns that the tool could sharply increase cybersecurity risk, according to a memo reviewed by Bloomberg News.

Gregory Barbaccia, federal chief information officer of the White House Office of Management and Budget, told officials at Cabinet departments in an email Tuesday that OMB is setting up protections that would allow their agencies to begin using the closely guarded AI tool, Mythos.

The email doesn’t say definitively that the various agencies will get access to Mythos, nor does it provide a timeline for when it might come or how they might use it. It tells top technology and cybersecurity chiefs to expect more information “in the coming weeks.”

US officials have previously urged private sector organizations to use Mythos to improve their cybersecurity. The Treasury Department has been seeking access to Mythos in order to uncover its own software flaws, Bloomberg has reported.

Anthropic has only provided Mythos to a limited group of technology companies, financial firms and others, urging them to use it to assess their cybersecurity risk. The firm limited the release of Mythos amid concerns that hackers could weaponize its capabilities to steal data or sabotage victim networks.

Before its limited release of Mythos, Anthropic briefed senior officials across the US government on the model’s full capabilities, including both its offensive and defensive cyber applications, according to a company official who spoke on condition that they not be identified discussing the talks with government. The talks included staff at the Cybersecurity and Infrastructure Security Agency and the Center for AI Standards and Innovation, among others, the company official said, and Anthropic has continued to work with government on security issues arising from the model.

Read More: Why Anthropic Won’t Release Mythos to the Public

Barbaccia’s message was sent as leaders from Washington to Wall Street are grappling with the possibility that the model could make it dramatically easier for hackers to find ways to break into sensitive computer systems in industry and government.

“We’re working closely with model providers, other industry partners, and the intelligence community to ensure the appropriate guardrails and safeguards are in place before potentially releasing a modified version of the model to agencies,” Barbaccia wrote in the email, which had the subject, “Mythos Model Access.”

A White House official said in an email that the government continues to work and engage with AI companies to ensure their models help secure critical software vulnerabilities. They didn’t answer specific questions on the matter.

Anthropic declined to comment.

Neither Anthropic nor the government said what, if any, federal agencies have gotten early access to Mythos.

Barbaccia’s email went to officials with the Department of Defense, Department of Treasury, Department of Commerce, Department of Homeland Security, Department of Justice and Department of State, among several other agencies.

The move to roll a version of Mythos out to the agencies signals the government’s continued interest in Anthropic’s tools despite a public spat and ongoing legal fight between the company and top Trump administration officials.

The Pentagon this year declared Anthropic a supply chain threat, under an authority normally reserved for foreign adversaries, over a dispute about artificial intelligence safeguards. The company won a court order last month blocking a ban on government use of the technology, after Anthropic argued the move could cost it billions of dollars in lost revenue.

Within Anthropic, company leaders became worried the model could be a national security risk after testers were able to use Mythos to turn up the types of critical bugs that it would normally take the world’s best hackers to uncover. These concerns prompted the company’s limited release of the model.

It’s similarly set off alarms in various parts of the US government.

Among officials focused on national defense, the introduction of Mythos has created profound uncertainty about how to evaluate cybersecurity risk, a person familiar with the matter previously told Bloomberg. Equipping an individual hacker with the model, or similar AI tools, would likely be a transformation equivalent to turning a conventional soldier into a special forces operator, the person said.

On the day Anthropic publicly disclosed Mythos’ existence, US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened Wall Street leaders for a meeting in Washington to urge them to use the model to find weaknesses in their own systems.

— With assistance from Rachel Metz

(Updated with additional context throughout.)




Technology | AI


By Rachel Metz

April 16, 2026 at 2:31 PM UTC

Updated on

April 16, 2026 at 2:54 PM UTC

Takeaways by Bloomberg AI

  • Anthropic PBC is introducing an updated version of its AI model, Opus 4.7, which is meant to be better at software engineering.
  • Opus 4.7 is less broadly capable than Mythos, including for cybersecurity uses, and Anthropic has implemented safeguards to detect and block prohibited or high-risk cybersecurity uses.
  • Anthropic is releasing Opus 4.7 while limiting the release of Mythos, which the company says can identify and exploit vulnerabilities in major operating systems and web browsers.

Anthropic PBC is introducing an updated version of its most powerful, widely available AI model, barely a week after it limited the release of a more advanced offering called Mythos.

Anthropic said Thursday that Opus 4.7 is meant to be better at software engineering, including fielding some of the hardest coding tasks that previously required greater supervision. The company said it follows a user’s instructions much better than previous models and can inspect higher-resolution images, making it possible to do things like identify details in complicated charts or pictures.

However, the new model is less broadly capable than Mythos, including for cybersecurity uses, Anthropic said. During the training process for Opus 4.7, Anthropic went so far as to experiment with ways to “differentially reduce” the model’s cyber capabilities, according to a company blog post.


Last week, Anthropic warned that its Mythos system was able to identify and then exploit vulnerabilities “in every major operating system and every major web browser when directed by a user to do so.” The company decided to only make a version of the model available to select businesses to help them safeguard their software.

“We are releasing Opus 4.7 with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses,” the company said in the blog post. “What we learn from the real-world deployment of these safeguards will help us work towards our eventual goal of a broad release of Mythos-class models.”

The Claude maker is locked in a heated rivalry with OpenAI to deploy better artificial intelligence models and convince more business customers to pay for them. In recent months, Anthropic has seen strong momentum for its AI coding offerings as well as growing traction with consumers amid a standoff with the Pentagon over AI safeguards.

Anthropic was most recently valued at $380 billion. The company is now fielding offers from investors for a new round of funding that could value it at about $800 billion or higher.

Follow all new stories by Rachel Metz


China’s Data Centers Are Plugging Into REIT-Style Financing Wave

By Bloomberg News

April 14, 2026 at 11:00 PM UTC

Takeaways by Bloomberg AI

  • China's data center operators are raising money from investors through a fast-growing asset-backed security market, with major operators selling about 9 billion yuan of these products over the past year.
  • The securities offer returns linked to the performance of underlying assets such as data centers, logistics parks or shopping malls, and typically generate annual distribution rates of around 4% to 8%.
  • The market for these securities has seen rapid uptake since its introduction less than three years ago, with almost 70% of total issuance coming in the past six months and volumes jumping more than tenfold from a year earlier.

China’s data center operators are tapping a fast-growing asset-backed security market, raising more than a billion dollars from investors hungry for higher yields.

The niche funding tool — known as holding-type real asset-backed securities — has seen rapid uptake since it was introduced less than three years ago. Almost 70% of total issuance has come in just the past six months, and volumes so far in 2026 have jumped more than tenfold from a year earlier, according to cn-abs.com, which tracks the local ABS market.

Major data center operators including GDS Holdings Ltd., VNET Group Inc. and Shanghai Yovole Networks Inc. have joined the wave, selling about 9 billion yuan ($1.3 billion) of the privately-placed products over the past year, the data show.

The unrated securities sit somewhere between traditional ABS and real-estate investment trusts, giving issuers more leeway in payments and fewer regulatory hurdles. Returns are more like dividends — instead of fixed coupons — linked to the performance of underlying assets such as data centers, logistics parks or shopping malls. China has been pushing to diversify funding channels beyond traditional borrowing and public listings.

Sales of Holding-Type Real Asset-Backed Securities Are Surging

Source: Cnabs.com, Bloomberg

“This asset class has taken off largely because regulators see it as an effective way for companies to raise money,” said Yao Yu, founder of credit-rating startup RatingDog (Shenzhen) Information Technology Co. Investors such as insurers and securities firms like the product’s relatively stable, higher long-term yields — especially in China’s current low interest-rate environment, he added.

These products typically generate annual distribution rates of around 4% to 8%, backed by rental income that can extend for several decades, according to people familiar with the matter, who asked not to be identified discussing private information.

In many cases, the potential returns can be more than double the average yield on China’s onshore junk-rated corporate bonds, the people said.
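To make that comparison concrete, here is a back-of-envelope sketch of how a dividend-like distribution rate relates to a bond yield. All figures are hypothetical, not taken from any actual deal.

```python
# Hypothetical illustration of a holding-type ABS distribution rate:
# net rental income passed through to investors, divided by the amount
# they paid for the securities. All numbers are made up for illustration.

def distribution_rate(net_annual_rent: float, issue_size: float) -> float:
    """Dividend-like annual payout rate on a holding-type ABS."""
    return net_annual_rent / issue_size

# A data-centre deal raising 1bn yuan against 60mn yuan of net annual rent
abs_rate = distribution_rate(net_annual_rent=60e6, issue_size=1e9)

junk_yield = 0.028  # illustrative onshore junk-bond yield

print(f"ABS distribution rate: {abs_rate:.1%}")                      # 6.0%
print(f"Multiple of junk-bond yield: {abs_rate / junk_yield:.1f}x")  # 2.1x
```

Because the payout floats with rental income rather than being a fixed coupon, the realized rate can land anywhere in the quoted 4% to 8% band.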

The Shanghai Stock Exchange, where about 90% of these securities are listed, didn’t immediately respond to questions about its outlook for these products or what it’s doing to support investors and drive further growth.

China's Junk Bond Yield Sinks to Record Low

Source: ChinaBond, Bloomberg

Many of the firms tapping this market have never sold public bonds in the nation’s domestic market. Private-sector companies and those in newer sectors have struggled to issue debt onshore, where investors tend to favor state-backed borrowers.

Data-center operators are drawn to it because building facilities to support artificial intelligence and cloud computing requires huge upfront investment, but the returns are generated slowly over time. That creates a mismatch between funding needs and cash flows.

The appeal extends beyond the tech sector.

Utilities, including toll-road operators and power producers, remain the largest issuers, reflecting their outsized role in the economy. They are followed by commercial property owners and data-center operators. Local government financing vehicles — investment arms that fund public infrastructure — have also been active as they’re seeking new funding channels after Beijing tightened scrutiny of local borrowing.

Wuhan Weineng Battery Asset Management Co., an electric-vehicle battery recycling and energy storage firm backed by NIO Inc. and Contemporary Amperex Technology Co., was among the latest borrowers to tap the market, raising 501 million yuan earlier this year.

A unit of distressed builder Seazen Holdings Co. priced 616 million yuan of such notes late last year and is applying to issue more. The company told investors it aims to raise as much as 8 billion yuan through these securities in 2026.

“Fixed-income investors in China have long been struggling in a low interest-rate environment, while issuers have been actively exploring diverse financing channels, so it’s a perfect match,” said Jerry Fang, managing director for structured finance ratings with S&P Global Ratings.

Broad Appeal

Holding-type real asset-backed securities' sales by sectors

Source: Cnabs.com, Bloomberg

Holding-type ABS offers an alternative as it allows issuers to borrow against tangible assets rather than relying purely on their corporate credit profiles. China Securities Co. estimates that total sales may reach 700 billion yuan in the medium to long term.

Despite the rapid growth, some investors remain cautious as the market is still in its early stages, with limited liquidity and few clear exit options.

“Investors should be prepared for a long-holding period and it’s unclear how sustainable the rental income from these assets will be over time,” said Li Gen, founder of Beijing G Capital Private Fund Management Center.

Still, analysts expect demand to continue growing, especially as China’s digital infrastructure expands. Project financing and ABS have been important funding channels for data centers globally, according to S&P’s Fang. With their capital spending set to increase sharply, operators will seek more convenient financing options.

“The market will most likely continue to boom going forward,” he added.




Iran Secretly Acquired Chinese Spy Satellite To Target US Bases, FT Investigation Finds
  • Satellite acquisition: The Iranian Revolutionary Guard Corps procured a Chinese spy satellite, designated TEE-01B, in late 2024
  • Imaging resolution: The spacecraft captures imagery at half-metre resolution, a substantial upgrade over existing Iranian systems
  • Operational control: Emposat, a Beijing-based firm, provides the ground station infrastructure that lets Iran direct the satellite’s operations
  • Targeting activities: Iranian military commanders tasked the satellite with surveying multiple US military bases, including locations in Saudi Arabia and Jordan
  • Strategic context: Imagery was collected immediately before and after strikes conducted by Iran against military sites across the Middle East
  • Technological origin: Earth Eye Co, a Chinese commercial entity, executed an in-orbit transfer to hand control of the satellite to the Iranian military
  • Security implications: The arrangement lets the Iranian military use globally distributed ground networks, evading potential kinetic strikes on domestic infrastructure
  • Official denials: The Chinese Ministry of Foreign Affairs categorically denied reports linking the satellite transfer to the ongoing regional conflict

Iran secretly acquired a Chinese spy satellite that gave the Islamic republic a powerful new capability to target US military bases across the Middle East during the recent war, according to a Financial Times investigation.

Leaked Iranian military documents show the satellite, known as TEE-01B, was acquired by the Islamic Revolutionary Guard Corps’ Aerospace Force in late 2024 after it was launched into space from China.

Time-stamped coordinate lists, satellite imagery and orbital analysis show that Iranian military commanders later tasked the satellite to monitor key US military sites. The images were taken in March before and after drone and missile strikes on those locations.

TEE-01B was built and launched by Earth Eye Co, a Chinese company that says it offers “in-orbit delivery”, a little-known export model under which spacecraft launched in China are transferred to overseas customers after reaching orbit.

As part of the agreement, the IRGC was granted access to commercial ground stations operated by Emposat, a Beijing-based provider of satellite control and data services with a global network spanning Asia, Latin America and other regions.

The use of a Chinese-built satellite by the IRGC during a war where Tehran has repeatedly targeted its neighbours with missiles and drones is likely to be highly sensitive across the region. China is the largest trading partner of the Gulf countries and is the largest buyer of their oil.

The logs show that the satellite captured images of Prince Sultan Air Base in Saudi Arabia on March 13, 14 and 15. On March 14, US President Donald Trump confirmed US planes at the base had been hit. Five US Air Force refuelling planes were damaged.

The satellite also conducted surveillance of the Muwaffaq Salti Air Base in Jordan and locations close to the US Fifth Fleet naval base in Manama, Bahrain, and Erbil airport, Iraq, around the time of IRGC-claimed attacks on facilities in those areas.

Other areas surveilled by the satellite included Camp Buehring and Ali Al Salem air base in Kuwait, the Camp Lemonnier US military base in Djibouti and Duqm International Airport in Oman. Gulf civilian infrastructure monitored included the Khor Fakkan container port area and the Qidfa power and desalination plant in the United Arab Emirates, as well as the Alba facility in Bahrain — one of the world’s largest aluminium smelters.

“This satellite is clearly being used for military purposes, as it is being run by the IRGC’s Aerospace Force and not Iran’s civilian space programme,” said Nicole Grajewski, an expert on Iran at Sciences Po university.

“Iran really needs this foreign-provided capability during this war, as it allows the IRGC to identify targets ahead of time and check the success of its strikes,” she added.

TEE-01B is capable of capturing imagery at roughly half-metre resolution, comparable to high-resolution commercially available western satellite imagery. It represents a significant upgrade on Iran’s domestic capabilities and would allow analysts to identify aircraft, vehicles and changes to infrastructure.

By contrast, the IRGC Aerospace Force’s previously most advanced military satellite — the Noor-3 — was estimated, based on Iranian claims, to capture imagery at about 5 metres resolution, an improvement on the Noor-2 system’s 12-15 metre imagery but still about an order of magnitude less precise than the Chinese-built satellite and insufficient to identify aircraft or monitor activity at military bases.
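The practical difference between those resolutions is easiest to see in pixel terms: identifying an object needs several pixels across it, not just one. A quick sketch, with an illustrative object size that is an assumption rather than a figure from the article:

```python
# How many pixels an object spans at a given ground sample distance (GSD).
# A target needs several pixels across to be identified rather than merely
# detected, which is why 5 m imagery cannot resolve individual aircraft.

def pixels_across(object_size_m: float, gsd_m: float) -> float:
    return object_size_m / gsd_m

wingspan_m = 13.0  # roughly a fighter-sized aircraft (illustrative)

print(pixels_across(wingspan_m, 0.5))  # TEE-01B-class, 0.5 m GSD: 26 pixels
print(pixels_across(wingspan_m, 5.0))  # Noor-3-class, 5 m GSD: 2.6 pixels
```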


TEE-01B greatly improves Iran’s surveillance capabilities

Reported resolution of selected Iranian satellites: TEE-01B, 0.5 m; Noor-3, 5 m; Paya, 10 m; Zafar-2, 15 m

Satellite imagery shows military aircraft at Prince Sultan Air Base in Saudi Arabia on March 5 (Planet Labs; FT reporting)

Earth Eye Co says on its website that it has carried out one “in-orbit” transfer to an unnamed country that was part of China’s Belt and Road Initiative. Iran joined Belt and Road in 2021.

The company says on its website that the satellite was intended to be used for “agriculture, ocean monitoring, emergency management, natural resource supervision, and municipal transportation”.

In September 2024, the IRGC Aerospace Force — which oversees Iran’s ballistic missile, drone and space programmes — agreed to pay about Rmb250mn ($36.6mn) to acquire control over the satellite system, according to the documents seen by the FT.


An image of the satellite Earth Eye says was transferred to ‘a Belt and Road country’ © Earth Eye Co

The renminbi-denominated agreement, signed by a brigadier general in the IRGC Aerospace Force, breaks down costs, including the satellite and its launcher, technical support, data infrastructure and services provided by a “foreign counterparty”.

Under the agreement, Emposat provides the IRGC with the software and ground network to run the satellite over its lifespan. These would send commands, receive telemetry and imagery, and allow the IRGC to direct the satellite’s operations from anywhere in the world.

“This amounts to a dispersion strategy for Iran’s space assets,” said Jim Lamson, a former CIA analyst focused on Iran and a senior research associate at the James Martin Center for Nonproliferation Studies.

“Iran’s satellite ground stations, which were hit in 2025 and 2026, can be hit very easily by missiles from a thousand miles away. You can’t just hit a Chinese ground station located in another country,” he added.

Israel’s military has said it has struck multiple space and satellite-related targets inside Iran during the current conflict, including the main research centre of the Iranian Space Agency in mid-March.

The IDF said the centre was being used to develop military satellites and intelligence collection, as well as “directing fire toward targets across the Middle East”.

Lamson said the TEE-01B satellite significantly expanded Iran’s ability to monitor US military assets.

“Iran has human intelligence assets around the region surveilling US military bases,” he said. “So if you are an Iranian military planner having a satellite like this available to combine with that, and also with Russian satellite imagery, is a powerful tool.”

The TEE-01B satellite was launched from the Jiuquan Satellite Launch Center in northwest China in June 2024 © Reuters

Iran’s expanding use of foreign satellite capabilities comes against a backdrop of deepening co-operation with Russia, which has launched several Iranian satellites in recent years.

China has sought to position its commercial space sector as civilian, even as its technologies are increasingly used in dual-use contexts.

US officials have been closely monitoring Chinese satellite companies believed to be supporting actors in the Middle East that threaten US security. The FT reported last year that Chang Guang Satellite Technology, a commercial group with ties to the Chinese military, provided satellite imagery to the Iran-backed Houthi rebels in Yemen to help them target US warships and international ships in the Red Sea.

Emposat, the Chinese company providing the ground infrastructure for the system, has been identified in a report by the House China committee as having close ties to China’s People’s Liberation Army Aerospace Force, including personnel linked to key satellite launch command centres.

While Emposat is a commercial company, it was founded by Richard Zhao, who spent 15 years working at the China Academy of Space Technology, a government-run organisation. Several of the top executives and engineers at Earth Eye also have connections to some of the Chinese universities that are dubbed the “seven sons of national defence” because of their close collaboration with the People’s Liberation Army, according to the group’s website.

“Emposat is a rising star in China’s commercial space sector, but it’s still a product of the state and military establishment,” said Aidan Powers-Riggs, an expert at the CSIS think-tank who has conducted research on the group. “It was founded by veterans of China’s state-run space programme and bankrolled by investment from national military-civil fusion funds.”

The revelation about the satellite contract comes as the US is more broadly concerned about Chinese help for Iran. Dennis Wilder, the former head of China analysis at the CIA, said China has a history of providing Iran with weapons as part of a pragmatic strategy to influence the Islamic republic on other issues, including in the past sending Silkworm anti-ship missiles that were used to obstruct shipping in the Strait of Hormuz.

One person familiar with the situation said the US had seen indications that China was considering providing Iran with the kind of shoulder-fired missiles that Iran recently used to shoot down an American F-15 fighter jet. The CIA declined to comment on the situation, which was first reported by CNN.

While Emposat operates commercially, analysts say such links underline the blurred boundary between civilian and military space capabilities in China.

“There is no way that any Chinese company could do something like launch a satellite without somebody in the administration giving it the go-ahead,” said one former senior western intelligence official. “I think it’s been very clear for some time that China has been helping the Iranians with intelligence, but trying to keep the hand of government hidden.”

Asked about the results of the FT’s investigation, China’s Ministry of Foreign Affairs said: “The relevant reports are untrue. This is a war that should never have happened, and everyone is clear about its origins. Recently, certain forces have been keen to fabricate rumours and maliciously link them to China. China firmly opposes this kind of ill-intentioned conduct.”

China’s Ministry of Commerce, Earth Eye and Emposat did not respond to requests for comment.

The CIA declined to comment. The White House did not comment specifically on the connection between Emposat and the IRGC. But a spokesperson referred to comments President Donald Trump made at the weekend when he warned that China would face “big problems” if it provided Iran with air defence systems.

Additional reporting by Chris Campbell in London and Joe Leahy in Beijing

Methodology

The FT analysed TEE-01B’s orbit using public US Space Force tracking data and found the satellite was in position to observe the locations at the times listed in documents shared with the FT. In each case, the timestamps, coordinates and sensor angles were consistent with the satellite’s position.

Government statements and media reports show many of the locations were hit in the days before or after the satellite’s passes. The FT identified potential damage using radar imagery from the European Space Agency’s Sentinel-1 satellite and medium-resolution imagery from Sentinel-2.
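The "in position to observe" check rests on simple orbital geometry: given a satellite's altitude and how far off nadir it can point, there is a maximum ground distance it can image. A rough sketch under a spherical-Earth assumption, with an illustrative altitude and pointing limit (the FT worked from actual US Space Force tracking data):

```python
import math

EARTH_R_KM = 6371.0

def ground_reach_km(alt_km: float, off_nadir_deg: float) -> float:
    """Ground distance from the sub-satellite point to the farthest
    target the sensor can image at a given off-nadir angle.

    Law of sines in the triangle (Earth centre, satellite, target):
    the Earth-central angle is asin(((R + h) / R) * sin(theta)) - theta.
    """
    theta = math.radians(off_nadir_deg)
    central = math.asin((EARTH_R_KM + alt_km) / EARTH_R_KM * math.sin(theta)) - theta
    return EARTH_R_KM * central

# An imaging satellite at 500 km altitude that can point 30 degrees off
# nadir reaches targets roughly 290 km to either side of its ground track.
print(round(ground_reach_km(500.0, 30.0)))
```

A target inside that reach at the logged timestamp is consistent with the tasking records; outside it, the log would be physically impossible.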


Scientists reverse brain aging, with a nasal spray – Texas A&M Stories

  • Neuroinflammaging concept: Chronic brain inflammation drives cognitive decline and memory loss during aging
  • Innovative nasal spray: Texas A&M researchers developed a noninvasive delivery method for treating brain aging
  • Treatment mechanism: Extracellular vesicles absorbed through the nose deliver regulatory microRNA directly into brain tissue
  • Cellular restoration: The therapy recharges mitochondrial function while suppressing inflammatory signaling pathways such as NLRP3
  • Observed outcomes: Treated models showed significant improvements in memory retention and environmental adaptability within weeks
  • Universal applicability: The intranasal treatment was equally effective across both sexes
  • Future potential: Applications could extend beyond general aging to recovery for human stroke survivors
  • Patent milestone: The researchers have filed a US patent to help translate the lab discovery into clinical practice

New therapy is turning back the clock in aging brains, healing inflammation, restoring memory and reshaping the future of brain age-related therapies

Credit: Getty Images

Picture this: your brain is a high-performance engine. Over decades, it doesn’t just wear down, it also starts to run hot.

Tiny “fires” of inflammation smolder deep within the brain’s memory center, creating a persistent brain fog that makes it harder to think, form new memories or even adapt to new environments, all the while increasing the risk of disorders like Alzheimer’s disease.

Scientists call this slow burn “neuroinflammaging,” and for decades it was thought to be the inevitable price of growing older.

Until now.

A landmark study from researchers at the Texas A&M University Naresh K. Vashisht College of Medicine suggests the inflammatory tide responsible for brain aging and brain fog might actually be reversible. And the solution doesn’t involve brain surgery, but a simple nasal spray.

Led by Dr. Ashok Shetty, university distinguished professor and associate director of the Institute for Regenerative Medicine, along with senior research scientists Dr. Madhu Leelavathi Narayana and Dr. Maheedhar Kodali, the team developed a nasal spray that, with just two doses, dramatically reduced brain inflammation, restored the brain’s cellular power plants and significantly improved memory.

The most surprising part? It all happened within weeks and lasted for months.

The findings, published in the Journal of Extracellular Vesicles, could reshape the future of neurodegenerative therapies and may even change how scientists think about brain aging itself.


Brain age-related diseases like dementia are a major health concern worldwide. What we’re showing is brain aging can be reversed, to help people stay mentally sharp, socially engaged and free from age-related decline.

Dr. Ashok K. Shetty, University Distinguished Professor, Texas A&M University Naresh K. Vashisht College of Medicine

Brain fog to brain focus, the future of cognitive therapy

The implications of this research could be nothing short of revolutionary.

“As we develop and scale this therapy, a simple, two-dose nasal spray could one day replace invasive, risky procedures or maybe even months of medication,” Shetty said.

The societal impact could be just as profound. In the United States alone, new dementia cases are projected to double over the next four decades, from about 514,000 in 2020 to about 1 million in 2060.

“The trend signals a pressing need for policies and innovative interventions that can minimize both the risk and severity of neurodegenerative disorders like dementia,” Shetty said.

The study also hints at broad applicability, working equally effectively across both genders — a rare outcome in biomedical research.

“It’s universal,” Shetty said. “Treatment outcomes were consistent and similar across both sexes.”

One day, the approach could even help stroke survivors rebuild lost brain function, or slow — even reverse — the effects of cognitive aging in humans.

“Our approach redefines what it means to grow old,” Shetty said. “We’re aiming for successful brain aging: keeping people engaged, alert and connected. Not just living longer, but living smarter and healthier.”

Rewiring the brain from the inside out

At the heart of this groundbreaking development are millions of microscopic biological parcels known as extracellular vesicles (EVs). They act like delivery vehicles, carrying powerful genetic cargo called microRNAs.

“MicroRNAs act like master regulators,” Narayana said. “They help modulate and regulate many gene and signaling pathways in the brain.”

But the delivery route is just as important as the cargo.

Packed into a nasal spray, the tiny EVs bypass the brain’s protective shield and travel directly into brain tissue, where they are absorbed.


In Shetty’s lab, researchers develop an innovative nasal spray targeting brain aging.

Credit: Texas A&M University Division of Marketing and Communications

“The mode of delivery is one of the most exciting aspects of our approach,” Kodali said. “Intranasal delivery allows us to reach, and treat, the brain directly without invasive procedures.”

Once absorbed into the brain’s resident immune cells, the microRNAs suppress systems, like NLRP3 inflammasome and the cGAS–STING signaling pathways, known to drive chronic inflammation in aging brains.

At a cellular level, the treatment recharged neuronal mitochondria, or the power plants that live inside the brain’s cells.

By recharging these cellular power plants, the therapy didn’t just clear brain fog, it physically improved the brain’s ability to process and store information.

“We are giving neurons their spark back by reducing oxidative stress and reactivating the brain’s mitochondria,” Narayana said.

Behavioral tests confirmed the biology. Models treated with the nasal spray showed remarkable improvements in not only recognizing familiar objects but also detecting new objects and changes in their environment, a sharp contrast to the control.

“We are seeing the brain’s own repair systems switch on, healing inflammation and restoring itself,” Shetty said.

While further research is needed, Shetty and his team have already filed a U.S. patent for the therapy, marking a milestone in what could become a breakthrough for brain aging treatments.

Behind the breakthrough

Breakthroughs like the one led by Shetty highlight Texas A&M as a research powerhouse, where national and global research priorities help shape the next generation of innovative solutions.

“We aren’t just trying to understand the biological mechanisms, we are translating and developing our findings into real-world therapies that could make a difference,” Shetty said.

With support from the NIA, discoveries like Shetty’s highlight Texas A&M’s role as a leader in research, where global and national priorities inspire the next wave of innovation.

Credit: Texas A&M University Naresh K. Vashisht College of Medicine

Backed by the National Institute on Aging (NIA), the Texas A&M team pooled collaborative knowledge, expertise and resources to turn a simple nasal spray into a therapy with the potential to reframe how scientists think about brain aging.

“Our partnership with the NIA is very important,” Shetty said. “This kind of work requires resources and the right people to tackle problems and develop solutions that could change lives.”

Ultimately, while the brain’s engine may sputter with age, scientists are now learning how to reignite it, sparking a new era of cognitive health and showing that the clock on brain aging might not just be paused but turned back.

More information: Intranasal Human NSC‑Derived EVs Therapy Can Restrain Inflammatory Microglial Transcriptome, and NLRP3 and cGAS‑STING Signalling, in Aged Hippocampus, Journal of Extracellular Vesicles 15(2): e70232 (2026).

DOI: 10.1002/jev2.70232


Anthropic shifts enterprise billing to usage-based pricing

  • Pricing Structure Shift: Anthropic has transitioned its enterprise plan from flat seat fees to usage-based billing, requiring customers to pay for Claude, Claude Code, and Cowork at standard API rates.
  • Service Reliability Issues: Claude reported an API uptime of 98.95% over a recent 90-day period, significantly lower than the 99.99% industry standard, leading some enterprise clients to migrate to alternative providers.
  • Compute Resource Constraints: Rising costs and supply shortages for hardware, specifically Blackwell GPUs, alongside increasing power grid demands, have necessitated the removal of subsidized flat-rate access.
  • Agentic Workload Impact: The adoption of automated agentic workflows has caused a substantial surge in token consumption, rendering previous fixed-price subscription models financially unsustainable for the provider.
  • Industry Wide Transition: Reflecting broader market conditions, other major AI organizations are similarly moving away from flat-fee subscriptions toward usage-metered pricing models for compute-heavy services.

Anthropic has restructured its enterprise plan to bill Claude, Claude Code, and Cowork usage separately from seat fees, moving its largest business customers to per-token pricing at standard API rates, according to the company's updated enterprise help documentation. Organizations on older seat-based plans with fixed usage allowances must migrate by their next contract renewal or lose the grandfathered terms. The Information first reported the shift, describing it as tied to a deepening compute crunch that will raise bills for heavy business users significantly.

That is the official version of the story. The unofficial version starts with David Hsu.

David Hsu prefers Claude Opus 4.6. He said so publicly. Then he moved his company off it.

Hsu runs Retool, the software platform. He told the Wall Street Journal that Anthropic's model was better, but the service kept dying. His customers could not ship code when Claude choked. So he picked OpenAI, the inferior option, because the inferior option stayed up.

That is the tell.

Anthropic's API uptime over the 90 days ending April 8 was 98.95 percent, according to WSJ reporting. Established cloud providers commit to 99.99. The gap does not sound like much until you translate it. At Anthropic's rate, that is roughly 92 hours of downtime a year. At the 99.99 standard, about 53 minutes. For an enterprise buyer, that is the difference between a tool and a liability.
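
The downtime figures follow from simple annual arithmetic; a quick sketch of the conversion:

```python
# Convert an uptime percentage into expected annual downtime.
HOURS_PER_YEAR = 365 * 24  # 8,760

def annual_downtime_hours(uptime_pct: float) -> float:
    """Hours per year a service at this uptime is expected to be down."""
    return (1 - uptime_pct / 100) * HOURS_PER_YEAR

# Anthropic's reported 90-day figure, annualized
print(round(annual_downtime_hours(98.95), 1))    # ~92.0 hours
# The four-nines standard established cloud providers commit to
print(round(annual_downtime_hours(99.99) * 60))  # ~53 minutes
```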

And then, quietly, Anthropic changed how it bills.

The Breakdown

  • Anthropic quietly moved enterprise seats to usage-based billing, unbundling Claude, Claude Code, and Cowork from flat fees.
  • Claude API uptime hit 98.95% over 90 days ending April 8, far below the 99.99% standard, pushing Retool founder David Hsu to switch to OpenAI.
  • Run rate more than tripled from $9B to $30B in four months, but session caps, cache TTL cuts, and OpenClaw metering show the subsidy bleeding.
  • Expect every major AI provider running agentic workloads to move to usage-based billing within six months.

AI-generated summary, reviewed by an editor. More on our AI guidelines.

What Anthropic stopped pretending

The enterprise help center now reads like a concession letter dressed as documentation. "The seat fee only covers access to the platform and doesn't include any usage," it says. "All usage across Claude, Claude Code, and Cowork is billed separately at standard API rates, based on what your team actually consumes." Anthropic is not framing the change as a price hike. The direction is unmistakable anyway. Run rate at the end of 2025 was $9 billion. Run rate today is $30 billion. And the company's message to its biggest customers is: budget more for compute.

That is not a typo. Run rate more than tripled in four months. The response was a billing restructure, not a victory lap.

The open bar was always a subsidy

For about eighteen months, the AI industry sold a story. The story was not that the models were good. The story was that the pricing was stable.

A $20 Claude Pro subscription gave you access to one of the best coding models in the world, running on compute that cost Anthropic significantly more than you paid. Power users always understood this. The arithmetic was never a secret. A single engineer on Claude Code burns through far more inference than a Pro fee could ever cover. The subscription was an open bar. Anthropic was buying the drinks.

Open bars work until somebody shows up thirsty. In this case, the thirsty somebody is the agent.

Agentic workflows do not sip. They chain tools across steps and run loops without asking. They spawn subagents carrying their own contexts. Cached tokens burn by the hundred thousand. One engineer running Claude Code overnight can consume the token budget of 200 casual chat users. OpenAI, running the same math on its own platform, watched token usage jump from 6 billion per minute in October to 15 billion per minute by late March. That is not growth. That is a break in the load curve.

Blackwell GPU rental prices climbed 48 percent in two months. CoreWeave raised prices more than 20 percent late last year and now forces smaller customers onto three-year contracts. Bank of America expects compute demand to outstrip supply through 2029. PJM, the grid operator for the eastern US, warned Friday that it needs 15 gigawatts of additional power tied to AI by early 2027.

The subsidy stopped being theoretical. It started bleeding.

The open bar closes

Watch the pattern. In late March, Anthropic quietly tightened five-hour session limits for Pro and Max users during peak hours, 5 a.m. to 11 a.m. Pacific, weekdays. About seven percent of users started hitting caps they had not hit before, per the company's own disclosure. Claude Code's prompt-cache time-to-live shifted back from one hour to five minutes in early March, driving up quota burn for long coding sessions. OpenClaw, a popular agent framework, got pulled out of the flat-fee bundle on April 4 and moved into usage-metered billing, with heavy users facing bills potentially 50 times higher.
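
The cache-TTL change matters because cached input tokens are typically billed at a steep discount to fresh ones, so a session whose pauses outlast the TTL re-pays full price for the same context on every turn. The rates below are hypothetical placeholders, not Anthropic's actual pricing; the sketch only shows the mechanism:

```python
# Why a shorter prompt-cache TTL drives up quota burn.
# Both rates are hypothetical placeholders, chosen only to illustrate the
# ratio between cached and uncached input tokens.
FULL_RATE = 3.00    # $ per million input tokens (hypothetical)
CACHED_RATE = 0.30  # $ per million cached input tokens (hypothetical)

def session_cost(context_tokens_m, turns, gap_minutes, ttl_minutes):
    """Cost of re-sending the same context each turn.

    If the pause between turns exceeds the cache TTL, the cache has
    expired and every turn pays the full input-token rate."""
    rate = CACHED_RATE if gap_minutes <= ttl_minutes else FULL_RATE
    return context_tokens_m * turns * rate

# A long coding session: 0.1M tokens of context, 50 turns, 10-minute pauses.
print(round(session_cost(0.1, 50, 10, ttl_minutes=60), 2))  # 1.5  (1-hour TTL)
print(round(session_cost(0.1, 50, 10, ttl_minutes=5), 2))   # 15.0 (5-minute TTL)
```

Same context, same amount of work; the only variable is how long the cache lives.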

Each move was defended as a "product change" or an "optimization." None was framed as a price hike. Put them together and the shape is obvious. Anthropic is lifting features out of the subscription, welding meters onto each one, and billing customers for the difference.

You can feel the institutional anxiety in the public responses. Anthropic engineers denied, on X and GitHub, that the company had degraded Claude. They are not wrong about the model. They are cornered on the framing. The model is the same. The economics underneath it are not.

One AMD senior director filed a data-heavy GitHub complaint on April 2, analyzing 6,852 Claude Code session files to argue Claude had stopped reasoning as deeply. Anthropic's Claude Code lead walked through her analysis, conceded that thinking-effort defaults had been lowered on March 3 to "the best balance across intelligence, latency and cost," and explained the rest as UI changes. Translate: we dialed down the compute and hoped nobody would notice.

People noticed. That is why the usage-based migration is happening in public now. Enterprise buyers will not accept silent shrinkflation. They want the meter visible, the billing legible, and someone else to blame when the monthly number gets ugly.

Who pays for the buildout

The shift does real work for Anthropic. It transfers compute cost from balance sheet to invoice. It lets the company keep signing million-dollar accounts without needing to pre-buy the GPU capacity to serve them flat-rate. It gives Anthropic a clean story for Wall Street. Every new revenue dollar is now backed by a compute dollar, metered and paid.

It breaks something else, though. Claude's growth engine was individual developers falling in love with Claude Code on their own dime, dragging it into their startups, then pushing employers to buy enterprise seats. That pipeline runs on a cheap Pro plan with enough headroom to experiment. Tighten the plan, meter the agents, add surprise quota math, and you poison the upstream.

Business Insider spoke with three of those users last week. One restructured his entire workday around limit resets. Another now breaks a single project into four micro-chats to conserve tokens. A third simply stops working when the cap hits, because manual coding feels pointless after Claude has spoiled him. The affection is intact. The relationship is not.

The crack nobody is naming

Here is the part Anthropic will not say out loud. Axios went hunting for a historical comparison and came back empty. Google's search-advertising ramp was the previous record. Anthropic covered nearly four times that ground in a single quarter. Snowflake took a decade to reach a billion in run rate. Anthropic added twenty-nine billion in roughly a year. More than 1,000 customers now pay over a million dollars a year for Claude.

None of those numbers mean anything if the service cannot stay up. 98.95 percent is the crack. Enterprise buyers have been trained by twenty years of cloud discipline to treat that figure as disqualifying, and some are already voting with their wallets. Ramp card data shows Anthropic gaining on OpenAI in business spend, yes. That data lags the service degradation. The next three months test whether customer affection for Claude outruns exhaustion with its downtime.

Expect two things from here. First, every major AI provider running agentic workloads moves to usage-based enterprise billing within six months. OpenAI already started, shifting Codex from flat-message to token metering in early April. GitHub tightened Copilot limits on April 10. Windsurf swapped credits for daily quotas. The flat-fee era is over. The memo is circulating.

Second, Anthropic keeps telling two stories at once. The growth story for investors goes like this. Thirty billion dollars, 1,000 million-dollar accounts, a three-year curve that would embarrass Rockefeller. The rationing story for users lives somewhere else entirely. Capacity tightens during peak hours. Effort defaults quietly dropped in March. Session caps hit sooner than they used to. OpenClaw got its own meter. Both stories are true. Neither is sustainable without the other eventually catching up.

David Hsu made his call already. He kept the inferior model because it stayed up. That is the verdict that should scare Anthropic more than any benchmark screenshot on X. It is not a performance problem. It is a trust problem. You cannot patch trust with a pricing page.

Frequently Asked Questions

What actually changed in Anthropic's enterprise pricing?

The enterprise plan's seat fee now only covers platform access. Usage across Claude, Claude Code, and Cowork is billed separately at standard API rates based on actual consumption. Organizations on older seat-based plans with fixed usage allowances must migrate by their next contract renewal or lose the grandfathered terms.

Why did Retool's founder switch from Claude to OpenAI?

David Hsu told the Wall Street Journal he preferred Anthropic's Opus 4.6 model, but the service kept going down. Anthropic's API uptime over the 90 days ending April 8 was 98.95 percent, compared to the 99.99 percent standard that established cloud providers maintain. That gap translates to roughly 92 hours of downtime a year versus about 53 minutes.

What is the compute crunch behind the pricing shift?

Blackwell GPU rental prices climbed 48 percent in two months. CoreWeave raised prices more than 20 percent late last year and now forces smaller customers into three-year contracts. Bank of America expects demand to outstrip supply through 2029. PJM, the eastern US grid operator, is looking for 15 gigawatts of additional power tied to AI by early 2027.

What did Anthropic change about Claude Code sessions?

In late March, Anthropic tightened five-hour session limits for Pro and Max users during weekday peak hours of 5 a.m. to 11 a.m. Pacific. About 7 percent of users started hitting caps they had not hit before. Claude Code's prompt-cache time-to-live shifted from one hour back to five minutes in early March, driving up quota burn for long coding sessions.

Are other AI providers moving to usage-based billing too?

Yes. OpenAI shifted Codex from flat-message pricing to token metering in early April and introduced a new $100 Pro tier for compute-heavy coding. GitHub tightened Copilot limits on April 10. Windsurf replaced its credit system with daily and weekly quotas in March. The flat-fee subscription era for agentic workloads is effectively over.


Marcus Schuler

Editor-in-Chief and founder of Implicator.ai. Former ARD correspondent and senior broadcast journalist with 10+ years covering tech. Writes daily briefings on policy and market developments. Based in San Francisco. E-mail: editor@implicator.ai
