Strategic Initiatives

China's AI Giveaway Finally Pays. Thank the Agents.

  • Strategic monetization: Chinese AI firms shifted from open-access models to profitable token-based revenue streams through agent integration.
  • Increased consumption: automated agent workflows drove substantially higher daily token usage than standard human chatbot interactions.
  • Pricing power: significant API price increases by industry leaders did not diminish call volumes, demonstrating inelastic demand.
  • Structural reorganization: major tech providers consolidated disparate divisions into specialized token hubs focused on commercializing token consumption.
  • Licensing changes: commercial restrictions on previously open models reflect a tactical pivot toward securing long-term business viability.
  • Distribution moats: cloud giants integrating models across existing consumer ecosystems gained a competitive advantage over pure-play labs.
  • Market risks: potential future disruption from aggressive pricing by competitors like DeepSeek threatens to erode industry margins.
  • Global impact: increased Chinese market revenue and operational scale have shifted external focus to competitive pricing and regulatory sanctions.

Last November at the University of Hong Kong, Joe Tsai tried to explain why Alibaba was giving away its flagship AI model for free. The room of students wanted a punch line. Tang Heiwai, the associate dean, pressed him: how do you guys make money by being so generous? Tsai smiled. "We don't make money from AI," he said. "That's the answer."

Five months later, the punch line arrived.

Zhipu AI raised API prices 83 percent in the first quarter of 2026. Call volumes jumped 400 percent anyway. Tencent Cloud hiked pricing on its Hunyuan series by more than 400 percent. Moonshot's Kimi K2.5 earned more in roughly 20 days than the company made in all of 2025, according to Voice of Context. Alibaba reorganized its entire AI operation into a single unit called the Token Hub, with CEO Eddie Wu writing a letter whose only recurring word was tokens.

The giveaway finally found a cash register. It took an AI agent to install it.

The Argument

  • Chinese open-source AI was a loss leader waiting for something to monetize. Agents turned out to be it.
  • Zhipu raised API prices 83 percent in Q1 2026 and call volumes still jumped 400 percent. Demand stopped responding to price.
  • The licenses are quietly closing. MiniMax restricted commercial use, Alibaba reorganized into a Token Hub, Z.ai and Baidu are hiking prices.
  • Cloud giants with distribution win. Pure-play labs without a moat look less durable than their share prices suggest.

AI-generated summary, reviewed by an editor. More on our AI guidelines.

The meter that never spun

For two years, Chinese open-source AI was a loss leader in search of a profitable side dish. Alibaba's Qwen hit nearly a billion cumulative downloads. DeepSeek's R1 reset global pricing. MiniMax and Zhipu rode the wave to blockbuster Hong Kong listings. Every model was cheap or free. Every margin was thin.

That was the plan. The formal name is commoditizing your complement. Make one layer of the stack effectively worthless, then charge for the adjacent layer that users now depend on. Google did it with Android. Apache did it with servers. Alibaba did it with Qwen, expecting to monetize cloud the way Tsai's hotel metaphor predicted: free room, paid minibar.

The minibar stayed empty.

Alibaba Cloud's computing margins still sit in the single digits, according to the company's latest earnings. OpenAI and Anthropic book 40 to 50 percent on closed models. Daniel Yue, a professor at Georgia Tech's business school, calls this the price of commoditizing yourself too well. When every model is free and roughly equivalent, nobody has pricing power. Not even you.

Every token call looked more or less the same through that period. A user typed a prompt. A model produced a completion. The interaction ended. Even heavy users were buying one-turn conversations, then walking away. That usage pattern is poison for anyone hoping to charge premium prices. Switching costs are near zero. Volumes are small per user. Benchmark differences of three or six months barely justify moving inference between providers. Zhang Jiang, a consultant who advises Chinese AI startups, told the South China Morning Post that Chinese firms were stuck in a volume-driven race instead of a value-driven one. Abundance was the only lever they had left.

The industry had built distribution without a way to charge for it. For years, nothing downstream had pricing power.

What agents rewired

OpenClaw changed the unit of sale.

An agent running a research task does not call a model once. It plans. It decomposes. It calls the model again, reads the result, reasons about it, retries when things break. A single overnight session can burn through more tokens than an entire month of chatbot use by the same person. MiniMax's average daily token consumption on its M2 series climbed six-fold between December 2025 and February 2026. Zhipu CEO Zhang Peng said at a recent industry event that users now leave OpenClaw running while they sleep, an intensive process he described as potentially boosting demand exponentially.

That is not a marginal shift. That is a different product.
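The multiplication is easy to see in miniature. The sketch below models a one-shot chatbot exchange against an agent loop that decomposes a task, retries failures, and feeds accumulated context back into every call. All step counts and token sizes are illustrative assumptions, not measured figures from any provider.

```python
def chatbot_session(prompt_tokens=200, completion_tokens=500):
    """One prompt, one completion, done."""
    return prompt_tokens + completion_tokens

def agent_session(subtasks=8, retries_per_task=2,
                  prompt_tokens=200, completion_tokens=500,
                  context_growth=300):
    """Plan, decompose, call the model per subtask, re-read results,
    retry when things break. Context accumulates, so each successive
    call carries a bigger prompt than the last."""
    total = chatbot_session(prompt_tokens, completion_tokens)  # planning call
    context = prompt_tokens
    for _task in range(subtasks):
        for _attempt in range(1 + retries_per_task):
            context += context_growth              # prior results fed back in
            total += context + completion_tokens   # one model call
    return total

print(chatbot_session())   # 700 tokens for the one-turn exchange
print(agent_session())     # 107,500 tokens under these assumptions
```

The exact numbers are arbitrary; the shape is not. Because context compounds across calls, agent consumption grows superlinearly in the number of steps, which is why one overnight session can dwarf a month of chat.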


Agents create something chatbots never did: lock-in. Once an agent is configured with a specific model's tool-calling format, memory structure, and prompt conventions, switching providers mid-workflow breaks things. You can argue the models are interchangeable. In practice, enterprises running agent workloads on top of Qwen or GLM-5 stay put, and our earlier reporting on Nvidia's open-weights strategy made the same point from a different angle: open is a wedge, not a gift.

What agents produce, in economic terms, is inelastic demand. Users can't easily switch. They need more tokens per task. That is the exact shape of a profitable market. Zhipu proved it the hard way. API prices up 83 percent in Q1. Call volume up 400 percent at the same time, according to Zhang's own earnings call. The math doesn't lie. Demand stopped responding to price.
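The Zhipu figures cited above can be sanity-checked with back-of-the-envelope arithmetic. This is a sketch using only the two percentages in the text, not Zhipu's actual accounting.

```python
price_multiplier = 1.83    # +83% API price (Q1 2026)
volume_multiplier = 5.00   # +400% call volume over the same period

# Revenue scales with price times quantity.
revenue_multiplier = price_multiplier * volume_multiplier
print(f"revenue multiplier: {revenue_multiplier:.2f}x")  # about 9.15x

# Naive arc elasticity: % change in quantity / % change in price.
# Inelastic demand would give a value between -1 and 0. Here the sign
# is positive -- quantity rose *with* price -- meaning the demand curve
# itself shifted outward (agent workloads) faster than prices climbed.
elasticity = (volume_multiplier - 1) / (price_multiplier - 1)
print(f"naive measured elasticity: {elasticity:.2f}")
```

Strictly speaking, a positive measured elasticity is stronger than inelasticity: price was not just tolerated, it was overwhelmed by new demand.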

When the licenses started changing

If open source was ever an ideological commitment, this would be the moment to double down. The mask started slipping instead.

MiniMax released its latest M2.7 model and amended its terms of use to prohibit commercial use without authorisation. The open-source community felt blindsided. Florian Brand, a San Francisco-based analyst of China's AI ecosystem, told SCMP that licensing restrictions could erode Chinese models' popularity among global developers given the wealth of alternatives. Daniel Yue argued the opposite, noting that such moves are not new in open-source software, just a sign that companies are trying to secure long-term commercial viability.

Alibaba and Z.ai released some of their newest models as closed-source at launch. Both, plus Baidu, hiked prices across models and cloud services. Last month, Alibaba reorganised five separate AI units (Tongyi Laboratory, Qwen, the Bailian MaaS platform, the Wukong enterprise agent arm, and an AI innovation group) into what it now calls the Alibaba Token Hub, reporting directly to Eddie Wu. "ATH is built around a single organising mission," Wu wrote in the announcement letter. "Create tokens, deliver tokens and apply tokens."

Read that sentence again. Not one mention of openness. Not one mention of the developer community. The word that appears three times is tokens.

Liu Liehong, the head of China's National Data Administration, formally christened the term at a March State Council press conference with a new Chinese word: ciyuan. Tokens are now, in Liu's phrase, the settlement unit linking technological supply with commercial demand. China's daily token calls rose from 100 billion at the start of 2024 to 140 trillion by March 2026, according to the National Data Administration. Chinese models held the top six spots on OpenRouter's global weekly rankings for the week ending April 5, with Qwen3.6-Plus setting a single-day record of more than 1.4 trillion tokens, according to OpenRouter data cited by People's Daily.

The strategy was never charity. It was patience.

Who this leaves standing

Cloud giants with distribution already win. Tencent's QClaw embeds OpenClaw inside WeChat, putting it in front of 1.3 billion users, according to China Briefing. Alibaba's Qwen agent reaches 300 million monthly actives across Taobao, Tmall, and Alipay. ByteDance's Doubao has 315 million chatbot users and a phone launched with ZTE in December. Distribution plus agent integration equals a meter per customer, spinning whenever the customer sleeps.

Pure-play labs without a distribution moat look less durable than their share prices suggest. MiniMax and Zhipu are up hundreds of percent from their IPO prices. Both are still bleeding. MiniMax posted a $250 million adjusted net loss on $79 million in 2025 revenue. Zhipu's total losses reached $680 million against $104.8 million in revenue, driven by R&D spending up 45 percent. They are effectively becoming suppliers to the cloud giants that can embed their models into hosted agent platforms. You can see why MiniMax amended its license. Cornered suppliers write harder contracts.

Developers outside China lose something too: the assumption that open source means permanently free. Kimi K2.5 charges $0.58 per million input tokens against Anthropic's Opus 4.5 at $5.00. Qwen dominates six of the top ten OpenRouter slots. But licenses can be rewritten. API prices can jump 400 percent in a quarter. The cheap option is a business decision, not a natural law. If you are building an enterprise workflow on Chinese open weights, you should price it that way.
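What "price it that way" means in practice: the per-token rates quoted above look decisive until you model a repricing event. The workload size below is a hypothetical figure for illustration; the per-million rates are the ones cited in the paragraph.

```python
KIMI_PER_M = 0.58    # Kimi K2.5, $ per million input tokens (cited above)
OPUS_PER_M = 5.00    # Anthropic Opus 4.5, $ per million input tokens

# Hypothetical agent fleet consuming 2 billion input tokens a month.
monthly_input_tokens = 2_000_000_000

kimi_cost = monthly_input_tokens / 1e6 * KIMI_PER_M   # ~$1,160/month
opus_cost = monthly_input_tokens / 1e6 * OPUS_PER_M   # $10,000/month

# A Zhipu-style 83% hike narrows the gap; a 400% hike, which the market
# has already seen, rewrites the comparison entirely.
kimi_after_hike = kimi_cost * 5.0
print(kimi_cost, opus_cost, kimi_after_hike)
```

The cheap option stays cheap only as long as the license and the rate card hold, which is exactly the risk the licensing changes above put on the table.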

Washington is watching a different angle of the same shift. The House Foreign Affairs Committee is weighing the Deterring American AI Model Theft Act, a bill from Representative Bill Huizenga that would sanction Chinese entities accused of adversarial distillation from US models. The underlying anxiety is not about openness. It is about pricing power. Cheap Chinese open weights undercut revenue the US labs need to pay for data centers and staff. Now that those weights have a profit mechanism attached, the fight gets harder.

What to watch

Tsai's hotel metaphor gave Alibaba a clean story. Open the room, charge for the service. For two years the rooms were open and nobody came to the lobby. Now a Shanghai engineer leaves an agent running overnight, a Shenzhen sole founder collects a Longgang district subsidy to automate her one-person company, and the meter finally spins.

Watch DeepSeek. Its next model is rumoured for release this month. R1 reset the entire market last year by crushing prices and forcing every competitor open. If DeepSeek does it again, the cash registers that just got installed could fall silent as fast as they filled up. If it does not, Chinese AI has its first real profit lever since the boom began.

The giveaway was always a bet on what came next. Agents are what came next. Now you find out what the bet was worth.

Frequently Asked Questions

What changed for Chinese AI models in early 2026?

Agent-based usage via OpenClaw multiplied token consumption per session. Zhipu raised API prices 83 percent in Q1 2026 and saw call volumes rise 400 percent anyway. Tencent Cloud hiked Hunyuan pricing over 400 percent. Moonshot's Kimi K2.5 earned more in 20 days than Moonshot made in all of 2025. Chinese models finally gained pricing power their open-source strategy had been missing.

Why was Chinese open-source AI barely profitable before agents arrived?

Models were commoditized, switching costs were near zero, and most usage was one-turn chatbot calls. Alibaba Cloud's computing margins stayed in single digits while OpenAI and Anthropic booked 40 to 50 percent on closed models. With every Chinese model free and roughly equivalent, no provider had meaningful pricing power.

What is the Alibaba Token Hub?

A March 2026 reorganization that consolidated Tongyi Laboratory, Qwen, the Bailian MaaS platform, the Wukong enterprise agent unit, and an AI innovation group into a single business unit reporting to CEO Eddie Wu. Wu's announcement letter defined its mission as create, deliver, and apply tokens. The word openness did not appear.

Are Chinese AI companies still open source?

Selectively. MiniMax amended its M2.7 terms of use to prohibit commercial use without authorisation. Alibaba and Z.ai released some recent models closed-source at launch. Baidu and Alibaba both hiked prices across models and cloud services. The pattern suggests open-source was strategic, not ideological.

What could disrupt this pricing power?

A new DeepSeek release. The Hangzhou lab's R1 model last year reset global pricing and forced competitors to open up and cut prices. The next model is rumoured for this month. If DeepSeek ships with similar impact, the cash registers Chinese AI just installed could fall silent as quickly as they filled up.



Analysis

Harkaram Grewal, New Delhi

Freelance correspondent reporting on the India-U.S.-Europe AI corridor and how AI models, capital, and policy decisions move across borders. Covers enterprise adoption, supply chains, and AI infrastructure deployment. Based in New Delhi.

White House Works to Give US Agencies Anthropic Mythos AI - Bloomberg

  • Federal Access To Mythos: The White House is establishing security protocols to provide federal agencies with access to Anthropic's restricted artificial intelligence, Mythos.
  • Cybersecurity Risk Concerns: Due to potential misuse by hackers for data theft or infrastructure sabotage, Anthropic has severely limited the public release of the high-capability Mythos model.
  • Software Engineering Update: Anthropic has launched Opus 4.7, an updated AI model optimized for coding tasks, which includes built-in safeguards to restrict high-risk cybersecurity applications.
  • Chinese Real Asset Securities: Data center operators in China are leveraging asset-backed securities to raise capital, with issuance volumes experiencing substantial growth over the previous year.
  • Financial Market Expansion: These real asset-backed financial products, which provide investors with yields linked to infrastructure performance, are increasingly utilized as an alternative funding channel to traditional borrowing.

Technology | Cybersecurity

[Photo: The White House in Washington. Photographer: Stefani Reynolds/Bloomberg]


By Jake Bleiberg and Margi Murphy
April 16, 2026 at 5:54 PM UTC (updated 6:51 PM UTC)

Takeaways by Bloomberg AI

  • The US government is preparing to make a version of Anthropic PBC's artificial intelligence model, Mythos, available to major federal agencies amid concerns it could increase cybersecurity risk.
  • The White House Office of Management and Budget is setting up protections to allow agencies to use Mythos, with more information to be provided "in the coming weeks".
  • Anthropic has limited the release of Mythos due to concerns that hackers could weaponize its capabilities to steal data or sabotage victim networks, and has briefed senior US government officials on the model's capabilities.

The US government is preparing to make a version of Anthropic PBC’s powerful new artificial intelligence model available to major federal agencies amid concerns that the tool could sharply increase cybersecurity risk, according to a memo reviewed by Bloomberg News.

Gregory Barbaccia, federal chief information officer of the White House Office of Management and Budget, told officials at Cabinet departments in an email Tuesday that OMB is setting up protections that would allow their agencies to begin using the closely guarded AI tool, Mythos.

The email doesn’t say definitively that the various agencies will get access to Mythos, nor does it provide a timeline for when it might come or how they might use it. It tells top technology and cybersecurity chiefs to expect more information “in the coming weeks.”

US officials have previously urged private sector organizations to use Mythos to improve their cybersecurity. The Treasury Department has been seeking access to Mythos in order to uncover its own software flaws, Bloomberg has reported.

Anthropic has only provided Mythos to a limited group of technology companies, financial firms and others, urging them to use it to assess their cybersecurity risk. The firm limited the release of Mythos amid concerns that hackers could weaponize its capabilities to steal data or sabotage victim networks.

Before its limited release of Mythos, Anthropic briefed senior officials across the US government on the model’s full capabilities, including both its offensive and defensive cyber applications, according to a company official who spoke on condition that they not be identified discussing the talks with government. The talks included staff at the Cybersecurity and Infrastructure Security Agency and the Center for AI Standards and Innovation, among others, the company official said, and Anthropic has continued to work with government on security issues arising from the model.

Read More: Why Anthropic Won’t Release Mythos to the Public

Barbaccia’s message was sent as leaders from Washington to Wall Street are grappling with the possibility that the model could make it dramatically easier for hackers to find ways to break into sensitive computer systems in industry and government.

“We’re working closely with model providers, other industry partners, and the intelligence community to ensure the appropriate guardrails and safeguards are in place before potentially releasing a modified version of the model to agencies,” Barbaccia wrote in the email, which had the subject, “Mythos Model Access.”

A White House official said in an email that the government continues to work and engage with AI companies to ensure their models help secure critical software vulnerabilities. They didn’t answer specific questions on the matter.

Anthropic declined to comment.

Neither Anthropic nor the government said what, if any, federal agencies have gotten early access to Mythos.

Barbaccia’s email went to officials with the Department of Defense, Department of Treasury, Department of Commerce, Department of Homeland Security, Department of Justice and Department of State, among several other agencies.

The move to roll a version of Mythos out to the agencies signals the government’s continued interest in Anthropic’s tools despite a public spat and ongoing legal fight between the company and top Trump administration officials.

The Pentagon this year declared Anthropic a supply chain threat, under an authority normally reserved for foreign adversaries, in a dispute over artificial intelligence safeguards. The company won a court order last month blocking a ban on government use of the technology, after Anthropic argued the move could cost it billions of dollars in lost revenue.

Within Anthropic, company leaders became worried the model could be a national security risk after testers were able to use Mythos to turn up the types of critical bugs that it would normally take the world’s best hackers to uncover. These concerns prompted the company’s limited release of the model.

It’s similarly set off alarms in various parts of the US government.

Among officials focused on national defense, the introduction of Mythos has created profound uncertainty about how to evaluate cybersecurity risk, a person familiar with the matter previously told Bloomberg. Equipping an individual hacker with the model, or similar AI tools, would likely be a transformation equivalent to turning a conventional soldier into a special forces operator, the person said.

On the day Anthropic publicly disclosed Mythos’ existence, US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened Wall Street leaders for a meeting in Washington to urge them to use the model to find weaknesses in their own systems.

— With assistance from Rachel Metz

(Updated with additional context throughout.)


Anthropic Releases AI Model With Weaker Cyber Skills Than Mythos


Technology | AI


By Rachel Metz
April 16, 2026 at 2:31 PM UTC (updated 2:54 PM UTC)

Takeaways by Bloomberg AI

  • Anthropic PBC is introducing an updated version of its AI model, Opus 4.7, which is meant to be better at software engineering.
  • Opus 4.7 is less broadly capable than Mythos, including for cybersecurity uses, and Anthropic has implemented safeguards to detect and block prohibited or high-risk cybersecurity uses.
  • Anthropic is releasing Opus 4.7 while limiting the release of Mythos, which the company says can identify and exploit vulnerabilities in major operating systems and web browsers.

Anthropic PBC is introducing an updated version of its most powerful, widely available AI model, barely a week after it limited the release of a more advanced offering called Mythos.

Anthropic said Thursday that Opus 4.7 is meant to be better at software engineering, including fielding some of the hardest coding tasks that previously required greater supervision. The company said it follows a user’s instructions much better than previous models and can inspect higher-resolution images, making it possible to do things like identify details in complicated charts or pictures.

However, the new model is less broadly capable than Mythos, including for cybersecurity uses, Anthropic said. During the training process for Opus 4.7, Anthropic went so far as to experiment with ways to “differentially reduce” the model’s cyber capabilities, according to a company blog post.


Last week, Anthropic warned that its Mythos system was able to identify and then exploit vulnerabilities “in every major operating system and every major web browser when directed by a user to do so.” The company decided to only make a version of the model available to select businesses to help them safeguard their software.

“We are releasing Opus 4.7 with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses,” the company said in the blog post. “What we learn from the real-world deployment of these safeguards will help us work towards our eventual goal of a broad release of Mythos-class models.”

The Claude maker is locked in a heated rivalry with OpenAI to deploy better artificial intelligence models and convince more business customers to pay for them. In recent months, Anthropic has seen strong momentum for its AI coding offerings as well as growing traction with consumers amid a standoff with the Pentagon over AI safeguards.

Anthropic was most recently valued at $380 billion. The company is now fielding offers from investors for a new round of funding that could value it at about $800 billion or higher.





By Bloomberg News

April 14, 2026 at 11:00 PM UTC

Takeaways by Bloomberg AI

  • China's data center operators are raising money from investors through a fast-growing asset-backed security market, with major operators selling about 9 billion yuan of these products over the past year.
  • The securities offer returns linked to the performance of underlying assets such as data centers, logistics parks or shopping malls, and typically generate annual distribution rates of around 4% to 8%.
  • The market for these securities has seen rapid uptake since its introduction less than three years ago, with almost 70% of total issuance coming in the past six months and volumes jumping more than tenfold from a year earlier.

China’s data center operators are tapping a fast-growing asset-backed security market, raising more than a billion dollars from investors hungry for higher yields.

The niche funding tool — known as holding-type real asset-backed securities — has seen rapid uptake since it was introduced less than three years ago. Almost 70% of total issuance has come in just the past six months, and volumes so far in 2026 have jumped more than tenfold from a year earlier, according to cn-abs.com, which tracks the local ABS market.

Major data center operators including GDS Holdings Ltd., VNET Group Inc. and Shanghai Yovole Networks Inc. have joined the wave, selling about 9 billion yuan ($1.3 billion) of the privately placed products over the past year, the data show.

The unrated securities sit somewhere between traditional ABS and real-estate investment trusts, giving issuers more leeway in payments and fewer regulatory hurdles. Returns are more like dividends — instead of fixed coupons — linked to the performance of underlying assets such as data centers, logistics parks or shopping malls. China has been pushing to diversify funding channels beyond traditional borrowing and public listings.

Sales of Holding-Type Real Asset-Backed Securities Are Surging

Source: Cnabs.com, Bloomberg

“This asset class has taken off largely because regulators see it as an effective way for companies to raise money,” said Yao Yu, founder of credit-rating startup RatingDog (Shenzhen) Information Technology Co. Investors such as insurers and securities firms like the product’s relatively stable, higher long-term yields — especially in China’s current low interest-rate environment, he added.

These products typically generate annual distribution rates of around 4% to 8%, backed by rental income that can extend for several decades, according to people familiar with the matter, who asked not to be identified discussing private information.

In many cases, the potential returns can be more than double the average yield on China’s onshore junk-rated corporate bonds, the people said.
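That comparison is simple division. A quick sketch, using the article's 4% to 8% distribution band and a purely hypothetical junk-bond yield (the article gives no figure, only that yields sit at record lows):

```python
def distribution_multiple(distribution_rate: float, junk_yield: float) -> float:
    """How many times a junk-bond yield a product's annual distribution rate pays."""
    return distribution_rate / junk_yield

# Article's stated band for holding-type real ABS: 4% to 8% annually.
# The junk-bond yield below is hypothetical, chosen only for illustration.
hypothetical_junk_yield = 0.028

low = distribution_multiple(0.04, hypothetical_junk_yield)
high = distribution_multiple(0.08, hypothetical_junk_yield)
print(f"{low:.1f}x to {high:.1f}x the junk-bond yield")  # prints "1.4x to 2.9x the junk-bond yield"
```

At that hypothetical yield, only the upper half of the 4% to 8% band clears the "more than double" bar, which matches the hedged "can be" in the reporting.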

The Shanghai Stock Exchange, where about 90% of these securities are listed, didn’t immediately respond to questions about its outlook for these products or what it’s doing to support investors and drive further growth.

China's Junk Bond Yield Sinks to Record Low


Source: ChinaBond, Bloomberg

Many of the firms tapping this market have never sold public bonds in the nation’s domestic market. Private-sector companies and those in newer sectors have struggled to issue debt onshore, where investors tend to favor state-backed borrowers.

Data-center operators are drawn to it because building facilities to support artificial intelligence and cloud computing requires huge upfront investment, but the returns are generated slowly over time. That creates a mismatch between funding needs and cash flows.

The appeal extends beyond the tech sector.

Utilities, including toll-road operators and power producers, remain the largest issuers, reflecting their outsized role in the economy. They are followed by commercial property owners and data-center operators. Local government financing vehicles — investment arms that fund public infrastructure — have also been active as they’re seeking new funding channels after Beijing tightened scrutiny of local borrowing.

Wuhan Weineng Battery Asset Management Co., an electric-vehicle battery recycling and energy storage firm backed by NIO Inc. and Contemporary Amperex Technology Co., was among the latest borrowers to tap the market, raising 501 million yuan earlier this year.

A unit of distressed builder Seazen Holdings Co. priced 616 million yuan of such notes late last year and is applying to issue more. The company told investors it aims to raise as much as 8 billion yuan through these securities in 2026.

“Fixed-income investors in China have long been struggling in a low interest-rate environment, while issuers have been actively exploring diverse financing channels, so it’s a perfect match,” said Jerry Fang, managing director for structured finance ratings with S&P Global Ratings.

Broad Appeal

Holding-type real asset-backed securities' sales by sectors

Source: Cnabs.com, Bloomberg

Holding-type ABS offers an alternative as it allows issuers to borrow against tangible assets rather than relying purely on their corporate credit profiles. China Securities Co. estimates that total sales may reach 700 billion yuan in the medium to long term.

Despite the rapid growth, some investors remain cautious as the market is still in its early stages, with limited liquidity and few clear exit options.

“Investors should be prepared for a long-holding period and it’s unclear how sustainable the rental income from these assets will be over time,” said Li Gen, founder of Beijing G Capital Private Fund Management Center.

Still, analysts expect demand to continue growing, especially as China’s digital infrastructure expands. Project financing and ABS have been important funding channels for data centers globally, according to S&P’s Fang. With their capital spending set to increase sharply, operators will seek more convenient financing options.

“The market will most likely continue to boom going forward,” he added.



1 Share
  • Satellite Acquisition: The Iranian Revolutionary Guard Corps Procured A Chinese Spy Satellite Designated TEE 01B In Late 2024
  • Imaging Resolution: This Spacecraft Features Half Meter Resolution Technology Providing A Substantial Capability Upgrade Over Existing Iranian Systems
  • Operational Control: Emposat A Beijing Based Firm Provides The Ground Station Infrastructure Necessary For Iran To Direct The Satellite Operations
  • Targeting Activities: Iranian Military Commanders Tasked The Satellite To Survey Multiple United States Military Bases Including Locations In Saudi Arabia And Jordan
  • Strategic Context: Imagery Was Collected Immediately Before And After Strikes Conducted By Iran Against Military Sites Throughout The Middle East
  • Technological Origin: Earth Eye Co A Chinese Commercial Entity Executed An In Orbit Transfer To Convey The Satellite Control To The Iranian Military
  • Security Implications: This Arrangement Permits The Iranian Military To Utilize Global Distributed Ground Networks To Evade Potential Kinetic Strikes On Domestic Infrastructure
  • Official Denials: The Chinese Ministry Of Foreign Affairs Categorically Denied Reports Linking The Satellite Transfer To The Ongoing Regional Conflict

Iran secretly acquired a Chinese spy satellite that gave the Islamic republic a powerful new capability to target US military bases across the Middle East during the recent war, according to a Financial Times investigation.

Leaked Iranian military documents show the satellite, known as TEE-01B, was acquired by the Islamic Revolutionary Guard Corps’ Aerospace Force in late 2024 after it was launched into space from China.

Time-stamped coordinate lists, satellite imagery and orbital analysis show that Iranian military commanders later tasked the satellite to monitor key US military sites. The images were taken in March before and after drone and missile strikes on those locations.

TEE-01B was built and launched by Earth Eye Co, a Chinese company that says it offers “in-orbit delivery”, a little-known export model under which spacecraft launched in China are transferred to overseas customers after reaching orbit.

As part of the agreement, the IRGC was granted access to commercial ground stations operated by Emposat, a Beijing-based provider of satellite control and data services with a global network spanning Asia, Latin America and other regions.

The use of a Chinese-built satellite by the IRGC during a war where Tehran has repeatedly targeted its neighbours with missiles and drones is likely to be highly sensitive across the region. China is the largest trading partner of the Gulf countries and is the largest buyer of their oil.

The logs show that the satellite captured images of Prince Sultan Air Base in Saudi Arabia on March 13, 14 and 15. On March 14, US President Donald Trump confirmed US planes at the base had been hit. Five US Air Force refuelling planes were damaged.

The satellite also conducted surveillance of the Muwaffaq Salti Air Base in Jordan and locations close to the US Fifth Fleet naval base in Manama, Bahrain, and Erbil airport, Iraq, around the time of IRGC-claimed attacks on facilities in those areas.

Other areas surveilled by the satellite included Camp Buehring and Ali Al Salem air base in Kuwait, the Camp Lemonnier US military base in Djibouti and Duqm International Airport in Oman. Gulf civilian infrastructure monitored included the Khor Fakkan container port area and the Qidfa power and desalination plant in the United Arab Emirates, as well as the Alba facility in Bahrain — one of the world’s largest aluminium smelters.

“This satellite is clearly being used for military purposes, as it is being run by the IRGC’s Aerospace Force and not Iran’s civilian space programme,” said Nicole Grajewski, an expert on Iran at Sciences Po university.

“Iran really needs this foreign-provided capability during this war, as it allows the IRGC to identify targets ahead of time and check the success of its strikes,” she added.

TEE-01B is capable of capturing imagery at roughly half-metre resolution, comparable to high-resolution commercially available western satellite imagery. It represents a significant upgrade on Iran’s domestic capabilities and would allow analysts to identify aircraft, vehicles and changes to infrastructure.

By contrast, the IRGC Aerospace Force’s previously most advanced military satellite — the Noor-3 — was estimated, based on Iranian claims, to capture imagery at about 5 metres resolution, an improvement on the Noor-2 system’s 12-15 metre imagery but still about an order of magnitude less precise than the Chinese-built satellite and insufficient to identify aircraft or monitor activity at military bases.
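The "order of magnitude" comparison is easy to make concrete. A minimal sketch using the resolutions reported above (0.5 m for TEE-01B, 5 m for Noor-3, 12 to 15 m for Noor-2):

```python
def resolution_gain(coarse_m: float, fine_m: float) -> tuple[float, float]:
    """Linear and per-area improvement when ground resolution goes from coarse to fine."""
    linear = coarse_m / fine_m
    area = linear ** 2  # each coarse pixel covers this many fine pixels
    return linear, area

linear, area = resolution_gain(5.0, 0.5)  # Noor-3 -> TEE-01B
print(f"TEE-01B vs Noor-3: {linear:.0f}x finer linearly, {area:.0f}x more pixels per area")
print(f"TEE-01B vs Noor-2: {12 / 0.5:.0f}x to {15 / 0.5:.0f}x finer linearly")
```

The tenfold linear gain is what "order of magnitude" refers to; in pixels per unit area the gain compounds quadratically, which is why half-metre imagery resolves aircraft that 5-metre imagery cannot.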


TEE-01B greatly improves Iran’s surveillance capabilities

Reported resolution of selected Iranian satellites

TEE-01B: 0.5m · Noor-3: 5m · Paya: 10m · Zafar-2: 15m

Satellite imagery shows military aircraft at Prince Sultan Air Base in Saudi Arabia on March 5 (Planet Labs; FT reporting)

Earth Eye Co says on its website that it has carried out one “in-orbit” transfer to an unnamed country that was part of China’s Belt and Road Initiative. Iran joined Belt and Road in 2021.

The company says on its website that the satellite was intended to be used for “agriculture, ocean monitoring, emergency management, natural resource supervision, and municipal transportation”.

In September 2024, the IRGC Aerospace Force — which oversees Iran’s ballistic missile, drone and space programmes — agreed to pay about Rmb250mn ($36.6mn) to acquire control over the satellite system, according to the documents seen by the FT.


An image of the satellite Earth Eye says was transferred to ‘a Belt and Road country’ © Earth Eye Co

The renminbi-denominated agreement, signed by a brigadier general in the IRGC Aerospace Force, breaks down costs, including the satellite and its launcher, technical support, data infrastructure and services provided by a “foreign counterparty”.

Under the agreement, Emposat provides the IRGC with the software and ground network to run the satellite over its lifespan. These would send commands, receive telemetry and imagery, and allow the IRGC to direct the satellite’s operations from anywhere in the world.

“This amounts to a dispersion strategy for Iran’s space assets,” said Jim Lamson, a former CIA analyst focused on Iran and a senior research associate at the James Martin Center for Nonproliferation Studies.

“Iran’s satellite ground stations, which were hit in 2025 and 2026, can be hit very easily by missiles from a thousand miles away. You can’t just hit a Chinese ground station located in another country,” he added.

Israel’s military has said it has struck multiple space and satellite-related targets inside Iran during the current conflict, including the main research centre of the Iranian Space Agency in mid-March.

The IDF said the centre was being used to develop military satellites and intelligence collection, as well as “directing fire toward targets across the Middle East”.

Lamson said the TEE-01B satellite significantly expanded Iran’s ability to monitor US military assets.

“Iran has human intelligence assets around the region surveilling US military bases,” he said. “So if you are an Iranian military planner having a satellite like this available to combine with that, and also with Russian satellite imagery, is a powerful tool.”


The TEE-01B satellite was launched from the Jiuquan Satellite Launch Center in northwest China in June 2024 © Reuters

Iran’s expanding use of foreign satellite capabilities comes against a backdrop of deepening co-operation with Russia, which has launched several Iranian satellites in recent years.

China has sought to position its commercial space sector as civilian, even as its technologies are increasingly used in dual-use contexts.

US officials have been closely monitoring Chinese satellite companies believed to be supporting actors in the Middle East that threaten US security. The FT reported last year that Chang Guang Satellite Technology, a commercial group with ties to the Chinese military, provided satellite imagery to the Iran-backed Houthi rebels in Yemen to help them target US warships and international ships in the Red Sea.

Emposat, the Chinese company providing the ground infrastructure for the system, has been identified in a report by the House China committee as having close ties to China’s People’s Liberation Army Aerospace Force, including personnel linked to key satellite launch command centres.

While Emposat is a commercial company, it was founded by Richard Zhao, who spent 15 years working at the China Academy of Space Technology, a government-run organisation. Several of the top executives and engineers at Earth Eye also have connections to some of the Chinese universities that are dubbed the “seven sons of national defence” because of their close collaboration with the People’s Liberation Army, according to the group’s website.

“Emposat is a rising star in China’s commercial space sector, but it’s still a product of the state and military establishment,” said Aidan Powers-Riggs, an expert at the CSIS think-tank who has conducted research on the group. “It was founded by veterans of China’s state-run space programme and bankrolled by investment from national military-civil fusion funds.”

The revelation about the satellite contract comes as the US is more broadly concerned about Chinese help for Iran. Dennis Wilder, the former head of China analysis at the CIA, said China has a history of providing Iran with weapons as part of a pragmatic strategy to influence the Islamic republic on other issues, including in the past sending Silkworm anti-ship missiles that were used to obstruct shipping in the Strait of Hormuz.

One person familiar with the situation said the US had seen indications that China was considering providing Iran with the kind of shoulder-fired missiles that Iran recently used to shoot down an American F-15 fighter jet. The CIA declined to comment on the situation, which was first reported by CNN.

While Emposat operates commercially, analysts say such links underline the blurred boundary between civilian and military space capabilities in China.

“There is no way that any Chinese company could do something like launch a satellite without somebody in the administration giving it the go-ahead,” said one former senior western intelligence official. “I think it’s been very clear for some time that China has been helping the Iranians with intelligence, but trying to keep the hand of government hidden.”

Asked about the results of the FT’s investigation, China’s Ministry of Foreign Affairs said: “The relevant reports are untrue. This is a war that should never have happened, and everyone is clear about its origins. Recently, certain forces have been keen to fabricate rumours and maliciously link them to China. China firmly opposes this kind of ill-intentioned conduct.”

China’s Ministry of Commerce, Earth Eye and Emposat did not respond to requests for comment.

The CIA declined to comment. The White House did not comment specifically on the connection between Emposat and the IRGC. But a spokesperson referred to comments President Donald Trump made at the weekend when he warned that China would face “big problems” if it provided Iran with air defence systems.

Additional reporting by Chris Campbell in London and Joe Leahy in Beijing

Methodology

The FT analysed TEE-01B’s orbit using public US Space Force tracking data and found the satellite was in position to observe the locations at the times listed in documents shared with the FT. In each case, the timestamps, coordinates and sensor angles were consistent with the satellite’s position.

Government statements and media reports show many of the locations were hit in the days before or after the satellite’s passes. The FT identified potential damage using radar imagery from the European Space Agency’s Sentinel-1 satellite and medium-resolution imagery from Sentinel-2.


Scientists reverse brain aging, with a nasal spray – Texas A&M Stories

1 Share
  • Neuroinflammaging Concept: Chronic Brain Inflammation Causes Cognitive Decline And Memory Loss During Aging Processes
  • Innovative Nasal Spray: Texas AM Researchers Developed A Noninvasive Therapeutic Delivery Method For Treating Brain Aging
  • Treatment Mechanism: Extracellular Vesicles Utilize Nasal Absorption To Deliver Regulatory MicroRNA Directly Into Brain Tissue
  • Cellular Restoration: The Therapy Successfully Recharges Mitochondrial Function While Suppressing Inflammatory Signaling Pathways Like NLRP3
  • Observed Outcomes: Clinical Models Demonstrated Significant Improvements In Memory Retention And Environmental Adaptability Within Weeks
  • Universal Applicability: Research Findings Indicate That The Intranasal Treatment Maintains Consistent Effectiveness Across Both Biological Sexes
  • Future Potential: Potential Applications Extend Beyond General Aging To Include Recovery Opportunities For Human Stroke Survivors
  • Patent Milestones: Researchers Have Filed A United States Patent To Facilitate The Translation Of Lab Discoveries Into Clinical Practice

New therapy is turning back the clock in aging brains, healing inflammation, restoring memory and reshaping the future of brain age-related therapies

Credit: Getty Images

Picture this: your brain is a high-performance engine. Over decades, it doesn’t just wear down, it also starts to run hot.

Tiny “fires” of inflammation smolder deep within the brain’s memory center, creating a persistent brain fog that makes it harder to think, form new memories or even adapt to new environments, all the while increasing the risk of disorders like Alzheimer’s disease.

Scientists call this slow burn “neuroinflammaging,” and for decades it was thought to be the inevitable price of growing older.

Until now.

A landmark study from researchers at the Texas A&M University Naresh K. Vashisht College of Medicine suggests the inflammatory tide responsible for brain aging and brain fog might actually be reversible. And the solution doesn’t involve brain surgery, but a simple nasal spray.

Led by Dr. Ashok Shetty, university distinguished professor and associate director of the Institute for Regenerative Medicine, along with senior research scientists Dr. Madhu Leelavathi Narayana and Dr. Maheedhar Kodali, the team developed a nasal spray that, with just two doses, dramatically reduced brain inflammation, restored the brain’s cellular power plants and significantly improved memory.

The most surprising part? It all happened within weeks and lasted for months.

The findings, published in the Journal of Extracellular Vesicles, could reshape the future of neurodegenerative therapies and may even change how scientists think about brain aging itself.


Brain age-related diseases like dementia are a major health concern worldwide. What we’re showing is brain aging can be reversed, to help people stay mentally sharp, socially engaged and free from age-related decline.

Dr. Ashok K. Shetty, University Distinguished Professor, Texas A&M University Naresh K. Vashisht College of Medicine

Brain fog to brain focus, the future of cognitive therapy

The implications of this research could be nothing short of revolutionary.

“As we develop and scale this therapy, a simple, two-dose nasal spray could one day replace invasive, risky procedures or maybe even months of medication,” Shetty said.

The societal impact could be just as profound. In the United States alone, new dementia cases are projected to double over the next four decades, from about 514,000 in 2020 to about 1 million in 2060.

“The trend signals a pressing need for policies and innovative interventions that can minimize both the risk and severity of neurodegenerative disorders like dementia,” Shetty said.

The study also hints at broad applicability, working equally well across both sexes — a rare outcome in biomedical research.

“It’s universal,” Shetty said. “Treatment outcomes were consistent and similar across both sexes.”

One day, the approach could even help stroke survivors rebuild lost brain function, or slow — even reverse — the effects of cognitive aging in humans.

“Our approach redefines what it means to grow old,” Shetty said. “We’re aiming for successful brain aging: keeping people engaged, alert and connected. Not just living longer, but living smarter and healthier.”

Rewiring the brain from the inside out

At the heart of this groundbreaking development are millions of microscopic biological parcels known as extracellular vesicles (EVs). They act like delivery vehicles, carrying powerful genetic cargo called microRNAs.

“MicroRNAs act like master regulators,” Narayana said. “They help modulate and regulate many gene and signaling pathways in the brain.”

But the delivery route is just as important as the cargo.

Packed into a nasal spray, the tiny EVs bypass the brain’s protective shield and travel directly into brain tissue, where they are absorbed.


In Shetty’s lab, researchers develop an innovative nasal spray targeting brain aging.

Credit: Texas A&M University Division of Marketing and Communications

“The mode of delivery is one of the most exciting aspects of our approach,” Kodali said. “Intranasal delivery allows us to reach, and treat, the brain directly without invasive procedures.”

Once absorbed into the brain’s resident immune cells, the microRNAs suppress systems, like NLRP3 inflammasome and the cGAS–STING signaling pathways, known to drive chronic inflammation in aging brains.

At a cellular level, the treatment recharged neuronal mitochondria, or the power plants that live inside the brain’s cells.

By recharging these cellular power plants, the therapy didn’t just clear brain fog, it physically improved the brain’s ability to process and store information.

“We are giving neurons their spark back by reducing oxidative stress and reactivating the brain’s mitochondria,” Narayana said.

Behavioral tests confirmed the biology. Models treated with the nasal spray showed remarkable improvements not only in recognizing familiar objects but also in detecting new objects and changes in their environment, a sharp contrast to untreated controls.

“We are seeing the brain’s own repair systems switch on, healing inflammation and restoring itself,” Shetty said.

While further research is needed, Shetty and his team have already filed a U.S. patent for the therapy, marking a milestone in what could become a breakthrough for brain aging treatments.

Behind the breakthrough

Breakthroughs like the one led by Shetty highlight Texas A&M as a research powerhouse, where national and global research priorities help shape the next generation of innovative solutions.

“We aren’t just trying to understand the biological mechanisms, we are translating and developing our findings into real-world therapies that could make a difference,” Shetty said.


With support from the NIA, discoveries like Shetty’s highlight Texas A&M’s role as a leader in research, where global and national priorities inspire the next wave of innovation.

Credit: Texas A&M University Naresh K. Vashisht College of Medicine

Backed by the National Institute on Aging (NIA), the Texas A&M team pooled collaborative knowledge, expertise and resources to turn a simple nasal spray into a therapy with the potential to reframe how scientists think about brain aging.

“Our partnership with the NIA is very important,” Shetty said. “This kind of work requires resources and the right people to tackle problems and develop solutions that could change lives.”

Ultimately, while the brain’s engine may sputter with age, scientists are now learning how to reignite it, sparking a new era of cognitive health and showing that the clock on brain aging might not just be paused, it can be turned back.

More information: Intranasal Human NSC‑Derived EVs Therapy Can Restrain Inflammatory Microglial Transcriptome, and NLRP3 and cGAS‑STING Signalling, in Aged Hippocampus, Journal of Extracellular Vesicles 15(2): e70232 (2026).

DOI: 10.1002/jev2.70232



Anthropic shifts enterprise billing to usage-based pricing

1 Share
  • Pricing Structure Shift: Anthropic has transitioned its enterprise plan from flat seat fees to usage-based billing, requiring customers to pay for Claude, Claude Code, and Cowork at standard API rates.
  • Service Reliability Issues: Anthropic's API uptime was 98.95% over a recent 90-day period, well below the 99.99% industry standard, leading some enterprise clients to migrate to alternative providers.
  • Compute Resource Constraints: Rising costs and supply shortages for hardware, specifically Blackwell GPUs, alongside increasing power grid demands, have necessitated the removal of subsidized flat-rate access.
  • Agentic Workload Impact: The adoption of automated agentic workflows has caused a substantial surge in token consumption, rendering previous fixed-price subscription models financially unsustainable for the provider.
  • Industry Wide Transition: Reflecting broader market conditions, other major AI organizations are similarly moving away from flat-fee subscriptions toward usage-metered pricing models for compute-heavy services.

Anthropic has restructured its enterprise plan to bill Claude, Claude Code, and Cowork usage separately from seat fees, moving its largest business customers to per-token pricing at standard API rates, according to the company's updated enterprise help documentation. Organizations on older seat-based plans with fixed usage allowances must migrate by their next contract renewal or lose the grandfathered terms. The Information first reported the shift, describing it as tied to a deepening compute crunch that will raise bills for heavy business users significantly.

That is the official version of the story. The unofficial version starts with David Hsu.

David Hsu prefers Claude Opus 4.6. He said so publicly. Then he moved his company off it.

Hsu runs Retool, the software platform. He told the Wall Street Journal that Anthropic's model was better, but the service kept dying. His customers could not ship code when Claude choked. So he picked OpenAI, the inferior option, because the inferior option stayed up.

That is the tell.

Anthropic's API uptime over the 90 days ending April 8 was 98.95 percent, according to WSJ reporting. Established cloud providers commit to 99.99. Just over a percentage point does not sound like much until you translate it. At Anthropic's rate, that is roughly 92 hours of downtime a year. At the 99.99 standard, 53 minutes. For an enterprise buyer, that is the difference between a tool and a liability.
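The nines-to-hours conversion is simple arithmetic:

```python
# Translate an uptime percentage into expected downtime per year.
def downtime_hours_per_year(uptime_pct: float) -> float:
    return (1 - uptime_pct / 100) * 365 * 24

anthropic = downtime_hours_per_year(98.95)   # ~92 hours
cloud_std = downtime_hours_per_year(99.99)   # ~0.88 hours
print(f"{anthropic:.0f} hours vs {cloud_std * 60:.0f} minutes")  # 92 hours vs 53 minutes
```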

And then, quietly, Anthropic changed how it bills.

The Breakdown

  • Anthropic quietly moved enterprise seats to usage-based billing, unbundling Claude, Claude Code, and Cowork from flat fees.
  • Claude API uptime hit 98.95% over 90 days ending April 8, far below the 99.99% standard, pushing Retool founder David Hsu to switch to OpenAI.
  • Revenue tripled from $9B to $30B in four months, but session caps, cache TTL cuts, and OpenClaw metering show the subsidy bleeding.
  • Expect every major AI provider running agentic workloads to move to usage-based billing within six months.

AI-generated summary, reviewed by an editor. More on our AI guidelines.

What Anthropic stopped pretending

The enterprise help center now reads like a concession letter dressed as documentation. "The seat fee only covers access to the platform and doesn't include any usage," it says. "All usage across Claude, Claude Code, and Cowork is billed separately at standard API rates, based on what your team actually consumes." Anthropic is not framing the change as a price hike. The direction is unmistakable anyway. Run rate at the end of 2025 was $9 billion. Run rate today is $30 billion. And the company's message to its biggest customers is: budget more for compute.

That is not a typo. Revenue tripled in four months. The response was a billing restructure, not a victory lap.

The open bar was always a subsidy

For about eighteen months, the AI industry sold a story. The story was not that the models were good. The story was that the pricing was stable.

A $20 Claude Pro subscription gave you access to one of the best coding models in the world, running on compute that cost Anthropic significantly more than you paid. Power users always understood this. The arithmetic was never a secret. A single engineer on Claude Code burns through far more inference than a Pro fee could ever cover. The subscription was an open bar. Anthropic was buying the drinks.

Open bars work until somebody shows up thirsty. In this case, the thirsty somebody is the agent.

Agentic workflows do not sip. They chain tools across steps and run loops without asking. They spawn subagents carrying their own contexts. Cached tokens burn by the hundred thousand. One engineer running Claude Code overnight can consume the token budget of 200 casual chat users. OpenAI, running the same math on its own platform, watched token usage jump from 6 billion per minute in October to 15 billion per minute by late March. That is not growth. That is a break in the load curve.
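The open-bar arithmetic is easy to reproduce. Every number below is an illustrative assumption except the 200x multiple, which comes from the comparison above; the blended token price and the casual user's daily burn are made up for the sketch.

```python
# Back-of-envelope on why agents break flat-fee plans.
PRICE_PER_MTOK = 10.0           # blended $ per million tokens (assumed)
CASUAL_TOKENS_PER_DAY = 50_000  # a chat user's daily burn (assumed)
AGENT_TOKENS_PER_DAY = 200 * CASUAL_TOKENS_PER_DAY  # per the 200x figure above

def monthly_cost(tokens_per_day: float, days: int = 30) -> float:
    return tokens_per_day * days / 1e6 * PRICE_PER_MTOK

print(monthly_cost(CASUAL_TOKENS_PER_DAY))  # 15.0   -> a $20 plan covers it
print(monthly_cost(AGENT_TOKENS_PER_DAY))   # 3000.0 -> it does not
```

Whatever the real per-token price, the 200x gap between chat usage and agent usage is what makes the flat fee arithmetic collapse.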


Blackwell GPU rental prices climbed 48 percent in two months. CoreWeave raised prices more than 20 percent late last year and now forces smaller customers onto three-year contracts. Bank of America expects compute demand to outstrip supply through 2029. PJM, the grid operator for the eastern US, warned Friday that it needs 15 gigawatts of additional power tied to AI by early 2027.

The subsidy stopped being theoretical. It started bleeding.

The open bar closes

Watch the pattern. In late March, Anthropic quietly tightened five-hour session limits for Pro and Max users during peak hours, 5 a.m. to 11 a.m. Pacific, weekdays. About seven percent of users started hitting caps they had not hit before, per the company's own disclosure. Claude Code's prompt-cache time-to-live shifted back from one hour to five minutes in early March, driving up quota burn for long coding sessions. OpenClaw, a popular agent framework, got pulled out of the flat-fee bundle on April 4 and moved into usage-metered billing, with heavy users facing bills potentially 50 times higher.

Each move was defended as a "product change" or an "optimization." None was framed as a price hike. Put them together and the shape is obvious. Anthropic is lifting features out of the subscription, welding meters onto each one, and sending customers the difference.

You can feel the institutional anxiety in the public responses. Anthropic engineers denied, on X and GitHub, that the company had degraded Claude. They are not wrong about the model. They are cornered on the framing. The model is the same. The economics underneath it are not.

One AMD senior director filed a data-heavy GitHub complaint on April 2, analyzing 6,852 Claude Code session files to argue Claude had stopped reasoning as deeply. Anthropic's Claude Code lead walked through her analysis, conceded that thinking-effort defaults had been lowered on March 3 to "the best balance across intelligence, latency and cost," and explained the rest as UI changes. Translate: we dialed down the compute and hoped nobody would notice.

People noticed. That is why the usage-based migration is happening in public now. Enterprise buyers will not accept a silent shrinkflation. They want the meter visible, the billing legible, and someone else to blame when the monthly number gets ugly.

Who pays for the buildout

The shift does real work for Anthropic. It transfers compute cost from balance sheet to invoice. It lets the company keep signing million-dollar accounts without needing to pre-buy the GPU capacity to serve them flat-rate. It gives Anthropic a clean story for Wall Street. Every new revenue dollar is now backed by a compute dollar, metered and paid.

It breaks something else, though. Claude's growth engine was individual developers falling in love with Claude Code on their own dime, dragging it into their startups, then pushing employers to buy enterprise seats. That pipeline runs on a cheap Pro plan with enough headroom to experiment. Tighten the plan, meter the agents, add surprise quota math, and you poison the upstream.

Business Insider spoke with three of those users last week. One restructured his entire workday around limit resets. Another now breaks a single project into four micro-chats to conserve tokens. A third simply stops working when the cap hits, because manual coding feels pointless after Claude has spoiled him. The affection is intact. The relationship is not.

The crack nobody is naming

Here is the part Anthropic will not say out loud. Axios went hunting for a historical comparison and came back empty. Google's search-advertising ramp was the previous record. Anthropic covered nearly four times that ground in a single quarter. Snowflake took a decade to reach a billion in run rate. Anthropic added twenty-nine billion in roughly a year. More than 1,000 customers now pay over a million dollars a year for Claude.

None of those numbers mean anything if the service cannot stay up. 98.95 percent is the crack. Enterprise buyers have been trained by twenty years of cloud discipline to treat that figure as disqualifying, and some are already voting with their wallets. Ramp card data shows Anthropic gaining on OpenAI in business spend, yes. That data lags the service degradation. The next three months test whether customer affection for Claude outruns exhaustion with its downtime.

Expect two things from here. First, every major AI provider running agentic workloads moves to usage-based enterprise billing within six months. OpenAI already started, shifting Codex from flat-message to token metering in early April. GitHub tightened Copilot limits on April 10. Windsurf swapped credits for daily quotas. The flat-fee era is over. The memo is circulating.

Second, Anthropic keeps telling two stories at once. The growth story for investors goes like this. Thirty billion dollars, 1,000 million-dollar accounts, a three-year curve that would embarrass Rockefeller. The rationing story for users lives somewhere else entirely. Capacity tightens during peak hours. Effort defaults quietly dropped in March. Session caps hit sooner than they used to. OpenClaw got its own meter. Both stories are true. Neither is sustainable without the other eventually catching up.

David Hsu made his call already. He kept the inferior model because it stayed up. That is the verdict that should scare Anthropic more than any benchmark screenshot on X. It is not a performance problem. It is a trust problem. You cannot patch trust with a pricing page.

Frequently Asked Questions

What actually changed in Anthropic's enterprise pricing?

The enterprise plan's seat fee now only covers platform access. Usage across Claude, Claude Code, and Cowork is billed separately at standard API rates based on actual consumption. Organizations on older seat-based plans with fixed usage allowances must migrate by their next contract renewal or lose the grandfathered terms.

Why did Retool's founder switch from Claude to OpenAI?

David Hsu told the Wall Street Journal he preferred Anthropic's Opus 4.6 model, but the service kept going down. Anthropic's API uptime over the 90 days ending April 8 was 98.95 percent, compared to the 99.99 percent standard that established cloud providers maintain. That gap translates to roughly 92 hours of downtime a year versus 53 minutes.

What is the compute crunch behind the pricing shift?

Blackwell GPU rental prices climbed 48 percent in two months. CoreWeave raised prices more than 20 percent late last year and now forces smaller customers into three-year contracts. Bank of America expects demand to outstrip supply through 2029. PJM, the eastern US grid operator, is looking for 15 gigawatts of additional power tied to AI by early 2027.

What did Anthropic change about Claude Code sessions?

In late March, Anthropic tightened five-hour session limits for Pro and Max users during weekday peak hours of 5 a.m. to 11 a.m. Pacific. About 7 percent of users started hitting caps they had not hit before. Claude Code's prompt-cache time-to-live shifted from one hour back to five minutes in early March, driving up quota burn for long coding sessions.

Are other AI providers moving to usage-based billing too?

Yes. OpenAI shifted Codex from flat-message pricing to token metering in early April and introduced a new $100 Pro tier for compute-heavy coding. GitHub tightened Copilot limits on April 10. Windsurf replaced its credit system with daily and weekly quotas in March. The flat-fee subscription era for agentic workloads is effectively over.



Analysis

Marcus Schuler

San Francisco

Editor-in-Chief and founder of Implicator.ai. Former ARD correspondent and senior broadcast journalist with 10+ years covering tech. Writes daily briefings on policy and market developments. Based in San Francisco. E-mail: editor@implicator.ai


Is Anthropic 'nerfing' Claude? Users increasingly report performance degradation as leaders push back | VentureBeat

1 Share
  • User accusations: Developers and AI power users claim that Claude model performance has diminished recently.
  • Shrinkflation concerns: Public discourse suggests customers are receiving reduced model capability for the same price.
  • Detailed analysis: An AMD official's comprehensive study of Claude Code logs suggests a regression in reasoning depth.
  • Corporate denial: Anthropic representatives maintain that the model weights remain unchanged, regardless of user perception.
  • Product adjustments: The company acknowledges modifying default effort levels and interface elements to balance latency and cost.
  • Benchmark disputes: Viral claims of model hallucinations have drawn peer rebuttals over inconsistent comparison methodology.
  • Usage restrictions: Confirmed changes to session limits and cache settings have intensified user suspicions about system performance.
  • Trust deficit: A growing gap exists between internal product definitions of model quality and the practical experience of professional users.

A growing number of developers and AI power users are taking to social media to accuse Anthropic of degrading the performance of Claude Opus 4.6 and Claude Code — intentionally or as an outcome of compute limits — arguing that the company’s flagship coding model feels less capable, less reliable and more wasteful with tokens than it did just weeks ago.

The complaints have spread quickly on GitHub, X and Reddit over the past several weeks, with several high-reach posts alleging that Claude has become worse at sustained reasoning, more likely to abandon tasks midway through, and more prone to hallucinations or contradictions.

Some users have framed the issue as “AI shrinkflation” — the idea that customers are paying the same price for a weaker product.

Others have gone further, suggesting Anthropic may be throttling or otherwise tuning Claude downward during periods of heavy demand.

Those claims remain unproven, and Anthropic employees have publicly denied that the company degrades models to manage capacity. At the same time, Anthropic has acknowledged real changes to usage limits and reasoning defaults in recent weeks, which has made the broader debate more combustible.

VentureBeat has reached out to Anthropic for further clarification on the recent accusations, including whether any recent changes to reasoning defaults, context handling, throttling behavior, inference parameters or benchmark methodology could help explain the spike in complaints.

We have also asked how Anthropic explains the recent benchmark-related claims and whether it plans to publish additional data that could reassure customers. An Anthropic spokesperson did not address the questions individually, instead referring us to X posts by Claude Code creator Boris Cherny and Claude Code team member Thariq Shihipar regarding Opus 4.6 performance and usage limits, respectively. Both X posts are also referenced and linked below.

Viral user complaints, including from an AMD Senior Director, argue Claude has become less capable

One of the most detailed public complaints originated as a GitHub issue filed by Stella Laurenzo on April 2, 2026, whose LinkedIn profile identifies her as Senior Director in AMD’s AI group.

In that post, Laurenzo wrote that Claude Code had regressed to the point that it could not be trusted for complex engineering work, then backed that claim with a sprawling analysis of 6,852 Claude Code session files, 17,871 thinking blocks and 234,760 tool calls.

The complaint argued that, starting in February, Claude’s estimated reasoning depth fell sharply while signs of poorer performance rose alongside it, including more premature stopping, more “simplest fix” behavior, more reasoning loops, and a measurable shift from research-first behavior to edit-first behavior.

The post’s broader point was that for advanced engineering workflows, extended reasoning is not a luxury but part of what makes the model usable in the first place.

That GitHub thread then escaped into the broader social media conversation, with X users including @Hesamation, who posted screenshots of Laurenzo's GitHub post to X on April 11, turning it into an even more viral talking point.

That amplification mattered because it gave the wider “Claude is getting worse” narrative something more concrete than anecdotal frustration: a long, data-heavy post from a senior AI leader at a major chip company arguing that the regression was visible in logs, tool-use patterns and user corrections, not just gut feeling.

Anthropic’s public response focused on separating perceived changes from actual model degradation. In a pinned follow-up on the same GitHub issue posted a week ago, Claude Code lead Boris Cherny thanked Laurenzo for the care and depth of the analysis but disputed its main conclusion.

Cherny said the “redact-thinking-2026-02-12” header cited in the complaint is a UI-only change that hides thinking from the interface and reduces latency, but “does not impact thinking itself,” “thinking budgets,” or how extended reasoning works under the hood.

He also said two other product changes likely affected what users were seeing: Opus 4.6’s move to adaptive thinking by default on Feb. 9, and a March 3 shift to medium effort, or effort level 85, as the default for Opus 4.6, which he said Anthropic viewed as the best balance across intelligence, latency and cost for most users.

Cherny added that users who want more extended reasoning can manually switch effort higher by typing /effort high in Claude Code terminal sessions.

That exchange gets at the core of the controversy. Critics like Laurenzo argue that Claude’s behavior in demanding coding workflows has plainly worsened and point to logs and usage patterns as evidence.

Anthropic, by contrast, is not saying nothing changed. It is saying the biggest recent changes were product and interface choices that affect what users see and how much effort the system expends by default, not a secret downgrade of the underlying model. That distinction may be technically important, but for power users who feel the product is delivering worse results, it is not necessarily a satisfying one.

External coverage from TechRadar and PC Gamer further amplified Laurenzo's post and larger wave of agreement from some power users.

Another viral post on X from developer Om Patel on April 7 made the same argument in even more direct terms, claiming that someone had “actually measured” how much “dumber” Claude had gotten and summarizing the result as a 67% drop.

That post helped popularize the “AI shrinkflation” label and pushed the controversy beyond hard-core Claude Code users into the broader AI discourse on X.

These claims have resonated because they map closely onto what many frustrated users say they are seeing in practice: more unfinished tasks, more backtracking, more token burn and a stronger sense that Claude is less willing to reason deeply through complicated coding jobs than it was earlier this year.

Benchmark posts turned anecdotal frustration into a public controversy

The loudest benchmark-based claim came from BridgeMind, which runs the BridgeBench hallucination benchmark. On April 12, the account posted that Claude Opus 4.6 had fallen from 83.3% accuracy and a No. 2 ranking in an earlier result to 68.3% accuracy and No. 10 in a new retest, calling that proof that “Claude Opus 4.6 is nerfed.”

That post spread widely and became one of the main anchors for the broader public case that Anthropic had degraded the model.

Other users also circulated benchmark-related or test-based posts suggesting that Opus 4.6 was underperforming versus Opus 4.5 in practical coding tasks.

Still other posts pointed to TerminalBench-related results as supposed evidence that the model’s behavior had changed in certain harnesses or product contexts.

The effect was cumulative: benchmark screenshots, side-by-side tests and anecdotal frustration all began reinforcing one another in public.

That matters because benchmark claims tend to travel farther than more subjective complaints. A developer saying a model “feels worse” is one thing. A screenshot showing a ranking drop from No. 2 to No. 10, or a dramatic percentage swing in accuracy, gives the appearance of hard proof, even when the underlying comparison may be more complicated.

Critics of the benchmark claims say the evidence is weaker than it looks

The most important rebuttal to the BridgeBench claim did not come from Anthropic. It came from Paul Calcraft, an outside software and AI researcher on X, who argued that the viral comparison was misleading because the earlier Opus 4.6 result was based on only six tasks while the later one was based on 30.

In his words, it was a “DIFFERENT BENCHMARK.” He also said that on the six tasks the two runs shared in common, Claude’s score moved only modestly, from 87.6% previously to 85.4% in the later run, and that the bigger swing appeared to come mostly from a single fabrication result without repeats. He characterized that as something that could easily fall within ordinary statistical noise.

That outside rebuttal matters because it undercuts one of the cleanest and most viral claims in circulation. It does not prove users are wrong to think something has changed. But it does suggest that at least some of the benchmark evidence now driving the story may be overstated, poorly normalized or not directly comparable.

Even the BridgeBench post itself drew a community note to similar effect. The note said the two benchmark runs covered different scopes — six tasks in one case and 30 in the other — and that the common-task subset showed only a minor change. That does not make the later result meaningless, but it weakens the strongest version of the “BridgeBench proved it” argument.

This is now a key feature of the controversy: the claims are not all equally strong. Some are grounded in first-hand user experience. Some point to real product changes. Some rely on benchmark comparisons that may not be apples-to-apples. And some depend on inferences about hidden system behavior that users outside Anthropic cannot directly verify.

Earlier capacity limits gave users a reason to suspect more changes under the hood

The current backlash also lands in the shadow of a real, confirmed Anthropic policy change from late March. On March 26, Anthropic technical staffer Thariq Shihipar posted that, “To manage growing demand for Claude,” the company was adjusting how 5-hour session limits work for Free, Pro and Max subscribers during peak hours, while keeping weekly limits unchanged.

He added that during weekdays from 5 a.m. to 11 a.m. Pacific time, users would move through their 5-hour session limits faster than before. In follow-up posts, he said Anthropic had landed efficiency wins to offset some of the impact, but that roughly 7% of users would hit session limits they would not have hit before, particularly on Pro tiers.

In an email on March 27, 2026, Anthropic told VentureBeat that Team and Enterprise customers were not affected by those changes, and that the shift was not dynamically optimized per user but instead applied to the peak-hour window the company had publicly described. Anthropic also said it was continuing to invest in scaling capacity.

Those comments were about session limits, not model downgrades. But they are important context, because they establish two things that users now keep connecting in public: first, Anthropic has been dealing with surging demand; second, it has already changed how usage is rationed during busy periods. That does not prove Anthropic reduced model quality. It does help explain why so many users are primed to believe something else may also have changed.

Prompt caching and TTL

A separate, more recent GitHub issue broadens the dispute beyond model quality and into pricing and quota behavior. In issue #46829, user seanGSISG argued that Claude Code’s prompt-cache time-to-live, or TTL, appeared to shift from a one-hour setting back to a five-minute setting in early March, based on analysis of nearly 120,000 API calls drawn from Claude Code session logs across two machines.

The complaint argues that this change drove meaningful increases in cache-creation costs and quota burn, especially for long-running coding sessions where cached context expires quickly and must be rebuilt. The author claims that this helps explain why some subscription users began hitting usage limits they had not previously encountered.

What makes this issue notable is that Anthropic did not flatly deny that something changed. In a reply on the thread, Jarred Sumner said the March 6 change was real and intentional, but rejected the framing that it was a regression. He said Claude Code uses different cache durations for different request types, and that one-hour cache is not always cheaper because one-hour writes cost more up front and only save money when the same cached context is reused enough times to justify it.

In his telling, the change was part of ongoing cache optimization work, not a silent downgrade, and the pre–March 6 behavior described in the issue “wasn’t the intended steady state.”
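Sumner's point about one-hour writes costing more up front can be made concrete with a break-even sketch. Assume calls spaced 5 to 60 minutes apart, so the short cache always expires but the long one never does. The multipliers are illustrative assumptions (short-TTL write at 1.25x the base input price, long-TTL write at 2x, cache read at 0.1x), and costs are in units of the base price of the cached context.

```python
# Break-even sketch: one-hour vs five-minute prompt caching for calls
# spaced 5-60 minutes apart. Multipliers are illustrative assumptions.
def cost_5min(n_calls: int) -> float:
    # every call misses the expired cache and rewrites it at 1.25x
    return 1.25 * n_calls

def cost_1hour(n_calls: int) -> float:
    # one write at 2x, then every subsequent call reads the cache at 0.1x
    return 2.0 + 0.1 * (n_calls - 1)

for n in (1, 2, 5):
    print(n, cost_5min(n), cost_1hour(n))
# 1 1.25 2.0  -> a single call: the one-hour cache costs more
# 2 2.5 2.1   -> by the second call it is already cheaper
# 5 6.25 2.4
```

Under these assumptions the one-hour cache wins after just two reuses, but for contexts that are written once and never resumed, the cheaper five-minute write is the right default. That is exactly the "not always cheaper" tradeoff Sumner describes.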

The thread later drew a more detailed response from Anthropic’s Cherny, who described one-hour caching as “nuanced” and said the company has been testing heuristics to improve cache hit rates, token usage and latency for subscribers. Cherny said Anthropic keeps five-minute cache for many queries, including subagents that are rarely resumed, and said turning off telemetry also disables experiment gates, which can cause Claude Code to fall back to a five-minute default in some cases.

He added that Anthropic plans to expose environment variables that let users force one-hour or five-minute cache behavior directly. Together, those replies do not validate the issue author’s claim that Anthropic silently made Claude Code more expensive overall, but they do confirm that Anthropic has been actively experimenting with cache behavior behind the scenes during the same period users began complaining more loudly about quota burn and changing product behavior.

Anthropic says user-facing changes, not secret degradation, explain much of the uproar

Anthropic-affiliated employees have publicly pushed back on the broadest accusations. In one widely circulated reply on X, Cherny responded to claims that Anthropic had secretly nerfed Claude Code by writing, “This is false.”

He said Claude Code had been defaulted to medium effort in response to user feedback that Claude was consuming too many tokens, and that the change had been disclosed both in the changelog and in a dialog shown to users when they opened Claude Code.

That response is notable because it concedes a meaningful product change while rejecting the more conspiratorial interpretation of it. Anthropic is not saying nothing changed. It is saying that what changed was disclosed and was aimed at balancing token use, not secretly reducing model quality.

Public documentation also supports the fact that effort defaults have been in motion. Claude Code’s changelog says that on April 7, Anthropic changed the default effort level from medium to high for API-key users as well as Bedrock, Vertex, Foundry, Team and Enterprise users.

That suggests Anthropic has actively been tuning these settings across different segments, which could plausibly affect user perceptions even if the core model weights are unchanged.

Shihipar has also directly denied the broader demand-management accusation. In a reply on X posted April 11, he said Anthropic does not “degrade” its models to better serve demand. He also said that changes to thinking summaries affected how some users were measuring Claude’s “thinking,” and that the company had not found evidence backing the strongest qualitative claims now spreading online.

The real issue may be trust as much as model quality

What is clear is that a trust gap has opened between Anthropic and some of its most demanding users.

For developers who rely on Claude Code all day, subtle shifts in visible thinking output, effort defaults, token burn, latency tradeoffs or usage caps can feel indistinguishable from a weaker model.

That is true whether the root cause is a product setting, a UI change, an inference-policy tweak, capacity pressure or a genuine quality regression.

It also means both sides of the fight may be talking past each other. Users are describing what they experience: more friction, more failures and less confidence. Anthropic is responding in product terms: effort defaults, hidden thinking summaries, changelog disclosures, and denials that demand pressure is causing secret model degradation.

Those are not necessarily incompatible descriptions. A model can feel worse to users even if the company believes it has not "nerfed" the underlying model in the way critics allege. But the timing is unfortunate: Anthropic's chief rival OpenAI has recently pivoted, putting more resources behind Codex, its competing enterprise and vibe-coding focused product, and even offering a new, more mid-range ChatGPT subscription to boost usage of the tool. That is not the kind of publicity that benefits Anthropic or its customer retention.

At the same time, the public evidence remains mixed. Some of the most viral claims have come from developers with detailed logs and strong opinions based on repeated use. Some of the benchmark evidence has been challenged by outside observers on methodological grounds. And Anthropic’s own recent changes to limits and settings ensure that this debate is happening against a backdrop of real adjustments, not pure rumor.
