Strategic Initiatives
  • Satellite acquisition: The Iranian Revolutionary Guard Corps procured a Chinese spy satellite, designated TEE-01B, in late 2024.
  • Imaging resolution: The spacecraft offers half-metre resolution, a substantial capability upgrade over existing Iranian systems.
  • Operational control: Emposat, a Beijing-based firm, provides the ground-station infrastructure Iran needs to direct the satellite's operations.
  • Targeting activities: Iranian military commanders tasked the satellite to survey multiple US military bases, including locations in Saudi Arabia and Jordan.
  • Strategic context: Imagery was collected immediately before and after Iranian strikes on military sites across the Middle East.
  • Technological origin: Earth Eye Co, a Chinese commercial entity, carried out an in-orbit transfer to hand control of the satellite to the Iranian military.
  • Security implications: The arrangement lets the Iranian military use globally distributed ground networks to evade potential kinetic strikes on domestic infrastructure.
  • Official denials: The Chinese Ministry of Foreign Affairs categorically denied reports linking the satellite transfer to the ongoing regional conflict.

Iran secretly acquired a Chinese spy satellite that gave the Islamic republic a powerful new capability to target US military bases across the Middle East during the recent war, according to a Financial Times investigation.

Leaked Iranian military documents show the satellite, known as TEE-01B, was acquired by the Islamic Revolutionary Guard Corps’ Aerospace Force in late 2024 after it was launched into space from China.

Time-stamped coordinate lists, satellite imagery and orbital analysis show that Iranian military commanders later tasked the satellite to monitor key US military sites. The images were taken in March before and after drone and missile strikes on those locations.

TEE-01B was built and launched by Earth Eye Co, a Chinese company that says it offers “in-orbit delivery”, a little-known export model under which spacecraft launched in China are transferred to overseas customers after reaching orbit.

As part of the agreement, the IRGC was granted access to commercial ground stations operated by Emposat, a Beijing-based provider of satellite control and data services with a global network spanning Asia, Latin America and other regions.

The use of a Chinese-built satellite by the IRGC during a war where Tehran has repeatedly targeted its neighbours with missiles and drones is likely to be highly sensitive across the region. China is the largest trading partner of the Gulf countries and is the largest buyer of their oil.

The logs show that the satellite captured images of Prince Sultan Air Base in Saudi Arabia on March 13, 14 and 15. On March 14, US President Donald Trump confirmed US planes at the base had been hit. Five US Air Force refuelling planes were damaged.

The satellite also conducted surveillance of the Muwaffaq Salti Air Base in Jordan and locations close to the US Fifth Fleet naval base in Manama, Bahrain, and Erbil airport, Iraq, around the time of IRGC-claimed attacks on facilities in those areas.

Other areas surveilled by the satellite included Camp Buehring and Ali Al Salem air base in Kuwait, the Camp Lemonnier US military base in Djibouti and Duqm International Airport in Oman. Gulf civilian infrastructure monitored included the Khor Fakkan container port area and the Qidfa power and desalination plant in the United Arab Emirates, as well as the Alba facility in Bahrain — one of the world’s largest aluminium smelters.

“This satellite is clearly being used for military purposes, as it is being run by the IRGC’s Aerospace Force and not Iran’s civilian space programme,” said Nicole Grajewski, an expert on Iran at Sciences Po university.

“Iran really needs this foreign-provided capability during this war, as it allows the IRGC to identify targets ahead of time and check the success of its strikes,” she added.

TEE-01B is capable of capturing imagery at roughly half-metre resolution, comparable to high-resolution commercially available western satellite imagery. It represents a significant upgrade on Iran’s domestic capabilities and would allow analysts to identify aircraft, vehicles and changes to infrastructure.

By contrast, the IRGC Aerospace Force’s previously most advanced military satellite — the Noor-3 — was estimated, based on Iranian claims, to capture imagery at about 5 metres resolution, an improvement on the Noor-2 system’s 12-15 metre imagery but still about an order of magnitude less precise than the Chinese-built satellite and insufficient to identify aircraft or monitor activity at military bases.


TEE-01B greatly improves Iran’s surveillance capabilities

Reported resolution of selected Iranian satellites: TEE-01B 0.5m; Noor-3 5m; Paya 10m; Zafar-2 15m

Satellite imagery shows military aircraft at Prince Sultan Air Base in Saudi Arabia on March 5 © Planet Labs; FT reporting

Earth Eye Co says on its website that it has carried out one “in-orbit” transfer to an unnamed country that was part of China’s Belt and Road Initiative. Iran joined Belt and Road in 2021.

The company says on its website that the satellite was intended to be used for “agriculture, ocean monitoring, emergency management, natural resource supervision, and municipal transportation”.

In September 2024, the IRGC Aerospace Force — which oversees Iran’s ballistic missile, drone and space programmes — agreed to pay about Rmb250mn ($36.6mn) to acquire control over the satellite system, according to the documents seen by the FT.


An image of the satellite Earth Eye says was transferred to ‘a Belt and Road country’ © Earth Eye Co

The renminbi-denominated agreement, signed by a brigadier general in the IRGC Aerospace Force, breaks down costs, including the satellite and its launcher, technical support, data infrastructure and services provided by a “foreign counterparty”.

Under the agreement, Emposat provides the IRGC with the software and ground network to run the satellite over its lifespan. These would send commands, receive telemetry and imagery, and allow the IRGC to direct the satellite’s operations from anywhere in the world.

“This amounts to a dispersion strategy for Iran’s space assets,” said Jim Lamson, a former CIA analyst focused on Iran and a senior research associate at the James Martin Center for Nonproliferation Studies.

“Iran’s satellite ground stations, which were hit in 2025 and 2026, can be hit very easily by missiles from a thousand miles away. You can’t just hit a Chinese ground station located in another country,” he added.

Israel’s military has said it has struck multiple space and satellite-related targets inside Iran during the current conflict, including the main research centre of the Iranian Space Agency in mid-March.

The IDF said the centre was being used to develop military satellites and collect intelligence, as well as for “directing fire toward targets across the Middle East”.

Lamson said the TEE-01B satellite significantly expanded Iran’s ability to monitor US military assets.

“Iran has human intelligence assets around the region surveilling US military bases,” he said. “So if you are an Iranian military planner having a satellite like this available to combine with that, and also with Russian satellite imagery, is a powerful tool.”


The TEE-01B satellite was launched from the Jiuquan Satellite Launch Center in northwest China in June 2024 © Reuters

Iran’s expanding use of foreign satellite capabilities comes against a backdrop of deepening co-operation with Russia, which has launched several Iranian satellites in recent years.

China has sought to position its commercial space sector as civilian, even as its technologies are increasingly used in dual-use contexts.

US officials have been closely monitoring Chinese satellite companies believed to be supporting actors in the Middle East that threaten US security. The FT reported last year that Chang Guang Satellite Technology, a commercial group with ties to the Chinese military, provided satellite imagery to the Iran-backed Houthi rebels in Yemen to help them target US warships and international ships in the Red Sea.

Emposat, the Chinese company providing the ground infrastructure for the system, has been identified in a report by the House China committee as having close ties to China’s People’s Liberation Army Aerospace Force, including personnel linked to key satellite launch command centres.

While Emposat is a commercial company, it was founded by Richard Zhao, who spent 15 years working at the China Academy of Space Technology, a government-run organisation. Several of the top executives and engineers at Earth Eye also have connections to some of the Chinese universities that are dubbed the “seven sons of national defence” because of their close collaboration with the People’s Liberation Army, according to the group’s website.

“Emposat is a rising star in China’s commercial space sector, but it’s still a product of the state and military establishment,” said Aidan Powers-Riggs, an expert at the CSIS think-tank who has conducted research on the group. “It was founded by veterans of China’s state-run space programme and bankrolled by investment from national military-civil fusion funds.”

The revelation about the satellite contract comes as the US is more broadly concerned about Chinese help for Iran. Dennis Wilder, the former head of China analysis at the CIA, said China has a history of providing Iran with weapons as part of a pragmatic strategy to influence the Islamic republic on other issues, including in the past sending Silkworm anti-ship missiles that were used to obstruct shipping in the Strait of Hormuz.

One person familiar with the situation said the US had seen indications that China was considering providing Iran with the kind of shoulder-fired missiles that Iran recently used to shoot down an American F-15 fighter jet. The CIA declined to comment on the situation, which was first reported by CNN.

While Emposat operates commercially, analysts say such links underline the blurred boundary between civilian and military space capabilities in China.

“There is no way that any Chinese company could do something like launch a satellite without somebody in the administration giving it the go-ahead,” said one former senior western intelligence official. “I think it’s been very clear for some time that China has been helping the Iranians with intelligence, but trying to keep the hand of government hidden.”

Asked about the results of the FT’s investigation, China’s Ministry of Foreign Affairs said: “The relevant reports are untrue. This is a war that should never have happened, and everyone is clear about its origins. Recently, certain forces have been keen to fabricate rumours and maliciously link them to China. China firmly opposes this kind of ill-intentioned conduct.”

China’s Ministry of Commerce, Earth Eye and Emposat did not respond to requests for comment.

The CIA declined to comment. The White House did not comment specifically on the connection between Emposat and the IRGC. But a spokesperson referred to comments President Donald Trump made at the weekend when he warned that China would face “big problems” if it provided Iran with air defence systems.

Additional reporting by Chris Campbell in London and Joe Leahy in Beijing

Methodology

The FT analysed TEE-01B’s orbit using public US Space Force tracking data and found the satellite was in position to observe the locations at the times listed in documents shared with the FT. In each case, the timestamps, coordinates and sensor angles were consistent with the satellite’s position.
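The pass-geometry check described above can be approximated without proprietary tools. The sketch below is a simplified illustration only: it assumes a circular orbit at a hypothetical 500 km altitude and 97.4° sun-synchronous inclination rather than the satellite's actual tracked ephemeris, and it propagates an idealised ground track to report the closest approach to a target site.

```python
import math

MU = 398600.4418        # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6371.0        # mean Earth radius, km
OMEGA_E = 7.2921159e-5  # Earth's rotation rate, rad/s

def ground_track(alt_km, incl_deg, t):
    """Sub-satellite point at time t (seconds) for an idealised circular
    orbit that starts at the ascending node over the Greenwich meridian."""
    a = R_EARTH + alt_km
    n = math.sqrt(MU / a**3)              # mean motion, rad/s
    i = math.radians(incl_deg)
    u = n * t                             # argument of latitude
    lat = math.asin(math.sin(i) * math.sin(u))
    lon_inertial = math.atan2(math.cos(i) * math.sin(u), math.cos(u))
    lon = (lon_inertial - OMEGA_E * t + math.pi) % (2 * math.pi) - math.pi
    return math.degrees(lat), math.degrees(lon)

def closest_approach_km(alt_km, incl_deg, tgt_lat, tgt_lon, hours=24):
    """Minimum great-circle distance from the ground track to a target."""
    p2 = math.radians(tgt_lat), math.radians(tgt_lon)
    best = float("inf")
    for t in range(0, hours * 3600, 10):  # sample every 10 s
        lat, lon = ground_track(alt_km, incl_deg, t)
        p1 = math.radians(lat), math.radians(lon)
        c = (math.sin(p1[0]) * math.sin(p2[0])
             + math.cos(p1[0]) * math.cos(p2[0]) * math.cos(p1[1] - p2[1]))
        best = min(best, R_EARTH * math.acos(max(-1.0, min(1.0, c))))
    return best

# Prince Sultan Air Base sits near 24.06 N, 47.58 E (public coordinates).
d = closest_approach_km(500, 97.4, 24.06, 47.58)
print(f"closest ground-track approach over 24 h: {d:.0f} km")
```

Whether a pass counts as “in position to observe” also depends on the imager's off-nadir slew range; the FT's check compared timestamps, coordinates and sensor angles against the satellite's actual tracked orbit.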

Government statements and media reports show many of the locations were hit in the days before or after the satellite’s passes. The FT identified potential damage using radar imagery from the European Space Agency’s Sentinel-1 satellite and medium-resolution imagery from Sentinel-2.
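Log-ratio change detection is a standard way to flag damage in co-registered before/after SAR amplitude pairs like the Sentinel-1 acquisitions mentioned above. The sketch below runs that detector on synthetic arrays; it is a generic illustration of the technique, not the FT's actual pipeline.

```python
import numpy as np

def change_mask(before, after, threshold_db=3.0):
    """Flag pixels whose backscatter amplitude changed by more than
    threshold_db between two co-registered SAR acquisitions, using the
    standard log-ratio detector: r = 10*log10(after/before)."""
    eps = 1e-6                                   # avoid divide-by-zero
    ratio_db = 10.0 * np.log10((after + eps) / (before + eps))
    return np.abs(ratio_db) > threshold_db

# Synthetic example: a uniform scene with one "damaged" patch whose
# backscatter drops sharply between the two acquisitions.
rng = np.random.default_rng(0)
before = rng.uniform(0.8, 1.2, size=(100, 100))
after = before.copy()
after[40:60, 40:60] *= 0.1                       # ~ -10 dB amplitude drop

mask = change_mask(before, after)
print(f"changed pixels: {mask.sum()} of {mask.size}")  # → changed pixels: 400 of 10000
```

In practice the hard parts are co-registration, speckle filtering and choosing the threshold; the detector itself is this simple.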


Scientists reverse brain aging, with a nasal spray – Texas A&M Stories

  • Neuroinflammaging concept: Chronic brain inflammation drives cognitive decline and memory loss during aging.
  • Innovative nasal spray: Texas A&M researchers developed a noninvasive delivery method for treating brain aging.
  • Treatment mechanism: Extracellular vesicles absorbed through the nasal passages deliver regulatory microRNA directly into brain tissue.
  • Cellular restoration: The therapy recharges mitochondrial function while suppressing inflammatory signaling pathways such as NLRP3.
  • Observed outcomes: Treated models demonstrated significant improvements in memory retention and environmental adaptability within weeks.
  • Universal applicability: The intranasal treatment was consistently effective across both sexes.
  • Future potential: Applications may extend beyond general aging to recovery for stroke survivors.
  • Patent milestones: The researchers have filed a US patent to help translate the lab findings into clinical practice.

New therapy is turning back the clock in aging brains, healing inflammation, restoring memory and reshaping the future of brain age-related therapies

Credit: Getty Images

Picture this: your brain is a high-performance engine. Over decades, it doesn’t just wear down, it also starts to run hot.

Tiny “fires” of inflammation smolder deep within the brain’s memory center, creating a persistent brain fog that makes it harder to think, form new memories or even adapt to new environments, all the while increasing the risk of disorders like Alzheimer’s disease.

Scientists call this slow burn “neuroinflammaging,” and for decades it was thought to be the inevitable price of growing older.

Until now.

A landmark study from researchers at the Texas A&M University Naresh K. Vashisht College of Medicine suggests the inflammatory tide responsible for brain aging and brain fog might actually be reversible. And the solution doesn’t involve brain surgery, but a simple nasal spray.

Led by Dr. Ashok Shetty, university distinguished professor and associate director of the Institute for Regenerative Medicine, along with senior research scientists Dr. Madhu Leelavathi Narayana and Dr. Maheedhar Kodali, the team developed a nasal spray that, with just two doses, dramatically reduced brain inflammation, restored the brain’s cellular power plants and significantly improved memory.

The most surprising part? It all happened within weeks and lasted for months.

The findings, published in the Journal of Extracellular Vesicles, could reshape the future of neurodegenerative therapies and may even change how scientists think about brain aging itself.


“Brain age-related diseases like dementia are a major health concern worldwide. What we’re showing is that brain aging can be reversed, to help people stay mentally sharp, socially engaged and free from age-related decline.”

Dr. Ashok K. Shetty, University Distinguished Professor, Texas A&M University Naresh K. Vashisht College of Medicine

Brain fog to brain focus, the future of cognitive therapy

The implications of this research could be nothing short of revolutionary.

“As we develop and scale this therapy, a simple, two-dose nasal spray could one day replace invasive, risky procedures or maybe even months of medication,” Shetty said.

The societal impact could be just as profound. In the United States alone, new dementia cases are projected to double over the next four decades, from about 514,000 in 2020 to about 1 million in 2060.

“The trend signals a pressing need for policies and innovative interventions that can minimize both the risk and severity of neurodegenerative disorders like dementia,” Shetty said.

The study also hints at broad applicability, working equally effectively across both genders — a rare outcome in biomedical research.

“It’s universal,” Shetty said. “Treatment outcomes were consistent and similar across both sexes.”

One day, the approach could even help stroke survivors rebuild lost brain function, or slow — even reverse — the effects of cognitive aging in humans.

“Our approach redefines what it means to grow old,” Shetty said. “We’re aiming for successful brain aging: keeping people engaged, alert and connected. Not just living longer, but living smarter and healthier.”

Rewiring the brain from the inside out

At the heart of this groundbreaking development are millions of microscopic biological parcels known as extracellular vesicles (EVs). They act like delivery vehicles, carrying powerful genetic cargo called microRNAs.

“MicroRNAs act like master regulators,” Narayana said. “They help modulate and regulate many gene and signaling pathways in the brain.”

But the delivery route is just as important as the cargo.

Packed into a nasal spray, the tiny EVs bypass the brain’s protective shield and travel directly into brain tissue, where they are absorbed.


In Shetty’s lab, researchers develop an innovative nasal spray targeting brain aging.

Credit: Texas A&M University Division of Marketing and Communications

“The mode of delivery is one of the most exciting aspects of our approach,” Kodali said. “Intranasal delivery allows us to reach, and treat, the brain directly without invasive procedures.”

Once absorbed into the brain’s resident immune cells, the microRNAs suppress systems known to drive chronic inflammation in aging brains, such as the NLRP3 inflammasome and the cGAS–STING signaling pathways.

At a cellular level, the treatment recharged neuronal mitochondria, or the power plants that live inside the brain’s cells.

By recharging these cellular power plants, the therapy didn’t just clear brain fog, it physically improved the brain’s ability to process and store information.

“We are giving neurons their spark back by reducing oxidative stress and reactivating the brain’s mitochondria,” Narayana said.

Behavioral tests confirmed the biology. Models treated with the nasal spray showed remarkable improvements in not only recognizing familiar objects but also detecting new objects and changes in their environment, a sharp contrast to the control.

“We are seeing the brain’s own repair systems switch on, healing inflammation and restoring itself,” Shetty said.

While further research is needed, Shetty and his team have already filed a U.S. patent for the therapy, marking a milestone in what could become a breakthrough for brain aging treatments.

Behind the breakthrough

Breakthroughs like the one led by Shetty highlight Texas A&M as a research powerhouse, where national and global research priorities help shape the next generation of innovative solutions.

“We aren’t just trying to understand the biological mechanisms, we are translating and developing our findings into real-world therapies that could make a difference,” Shetty said.


With support from the NIA, discoveries like Shetty’s highlight Texas A&M’s role as a leader in research, where global and national priorities inspire the next wave of innovation.

Credit: Texas A&M University Naresh K. Vashisht College of Medicine

Backed by the National Institute on Aging (NIA), the Texas A&M team pooled collaborative knowledge, expertise and resources to turn a simple nasal spray into a therapy with the potential to reframe how scientists think about brain aging.

“Our partnership with the NIA is very important,” Shetty said. “This kind of work requires resources and the right people to tackle problems and develop solutions that could change lives.”

Ultimately, while the brain’s engine may sputter with age, scientists are now learning how to reignite it, sparking a new era of cognitive health and showing that the clock on brain aging might not just be paused but actually turned back.

More information: Intranasal Human NSC‑Derived EVs Therapy Can Restrain Inflammatory Microglial Transcriptome, and NLRP3 and cGAS‑STING Signalling, in Aged Hippocampus, Journal of Extracellular Vesicles 15(2): e70232 (2026).

DOI: 10.1002/jev2.70232



Anthropic shifts enterprise billing to usage-based pricing

  • Pricing Structure Shift: Anthropic has transitioned its enterprise plan from flat seat fees to usage-based billing, requiring customers to pay for Claude, Claude Code, and Cowork at standard API rates.
  • Service Reliability Issues: Anthropic's API uptime measured 98.95% over a recent 90-day period, well below the 99.99% industry standard, leading some enterprise clients to migrate to alternative providers.
  • Compute Resource Constraints: Rising costs and supply shortages for hardware, specifically Blackwell GPUs, alongside increasing power grid demands, have necessitated the removal of subsidized flat-rate access.
  • Agentic Workload Impact: The adoption of automated agentic workflows has caused a substantial surge in token consumption, rendering previous fixed-price subscription models financially unsustainable for the provider.
  • Industry Wide Transition: Reflecting broader market conditions, other major AI organizations are similarly moving away from flat-fee subscriptions toward usage-metered pricing models for compute-heavy services.

Anthropic has restructured its enterprise plan to bill Claude, Claude Code, and Cowork usage separately from seat fees, moving its largest business customers to per-token pricing at standard API rates, according to the company's updated enterprise help documentation. Organizations on older seat-based plans with fixed usage allowances must migrate by their next contract renewal or lose the grandfathered terms. The Information first reported the shift, describing it as tied to a deepening compute crunch that will raise bills for heavy business users significantly.

That is the official version of the story. The unofficial version starts with David Hsu.

David Hsu prefers Claude Opus 4.6. He said so publicly. Then he moved his company off it.

Hsu runs Retool, the software platform. He told the Wall Street Journal that Anthropic's model was better, but the service kept dying. His customers could not ship code when Claude choked. So he picked OpenAI, the inferior option, because the inferior option stayed up.

That is the tell.

Anthropic's API uptime over the 90 days ending April 8 was 98.95 percent, according to WSJ reporting. Established cloud providers commit to 99.99. The gap looks small on paper until you translate it. At Anthropic's rate, that is roughly 92 hours of downtime a year. At the 99.99 standard, 53 minutes. For an enterprise buyer, that is the difference between a tool and a liability.
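The translation from an availability percentage to annual downtime is simple arithmetic:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760

def annual_downtime_hours(availability_pct):
    """Expected downtime per year implied by an availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

print(f"98.95% -> {annual_downtime_hours(98.95):.0f} hours/year")        # 92 hours
print(f"99.99% -> {annual_downtime_hours(99.99) * 60:.0f} minutes/year")  # 53 minutes
```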

And then, quietly, Anthropic changed how it bills.

The Breakdown

  • Anthropic quietly moved enterprise seats to usage-based billing, unbundling Claude, Claude Code, and Cowork from flat fees.
  • Claude API uptime hit 98.95% over 90 days ending April 8, far below the 99.99% standard, pushing Retool founder David Hsu to switch to OpenAI.
  • Revenue run rate more than tripled, from $9B to $30B, in four months, but session caps, cache TTL cuts, and OpenClaw metering show the subsidy bleeding.
  • Expect every major AI provider running agentic workloads to move to usage-based billing within six months.

AI-generated summary, reviewed by an editor. More on our AI guidelines.

What Anthropic stopped pretending

The enterprise help center now reads like a concession letter dressed as documentation. "The seat fee only covers access to the platform and doesn't include any usage," it says. "All usage across Claude, Claude Code, and Cowork is billed separately at standard API rates, based on what your team actually consumes." Anthropic is not framing the change as a price hike. The direction is unmistakable anyway. Run rate at the end of 2025 was $9 billion. Run rate today is $30 billion. And the company's message to its biggest customers is: budget more for compute.
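Under the structure described in the help documentation, an enterprise bill becomes the sum of seat fees and metered token consumption. A minimal estimator, using hypothetical per-seat and per-million-token rates (the documentation quoted above does not list actual figures):

```python
def monthly_bill(seats, seat_fee_usd, input_mtok, output_mtok,
                 input_rate_usd=15.0, output_rate_usd=75.0):
    """Estimate a monthly enterprise bill under usage-based pricing.
    All rates here are hypothetical placeholders, in USD; token volumes
    are in millions of tokens."""
    usage = input_mtok * input_rate_usd + output_mtok * output_rate_usd
    return seats * seat_fee_usd + usage

# A 50-seat team whose agents consume 2,000M input / 400M output tokens a month.
total = monthly_bill(seats=50, seat_fee_usd=40, input_mtok=2000, output_mtok=400)
print(f"${total:,.0f}/month")  # → $62,000/month
```

Note how the seat fees ($2,000) are a rounding error next to metered usage ($60,000), which is why heavy agentic users are the ones who feel this change.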

That is not a typo. Run rate more than tripled in four months. The response was a billing restructure, not a victory lap.

The open bar was always a subsidy

For about eighteen months, the AI industry sold a story. The story was not that the models were good. The story was that the pricing was stable.

A $20 Claude Pro subscription gave you access to one of the best coding models in the world, running on compute that cost Anthropic significantly more than you paid. Power users always understood this. The arithmetic was never a secret. A single engineer on Claude Code burns through far more inference than a Pro fee could ever cover. The subscription was an open bar. Anthropic was buying the drinks.

Open bars work until somebody shows up thirsty. In this case, the thirsty somebody is the agent.

Agentic workflows do not sip. They chain tools across steps and run loops without asking. They spawn subagents carrying their own contexts. Cached tokens burn by the hundred thousand. One engineer running Claude Code overnight can consume the token budget of 200 casual chat users. OpenAI, running the same math on its own platform, watched token usage jump from 6 billion per minute in October to 15 billion per minute by late March. That is not growth. That is a break in the load curve.
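The 200-to-1 comparison is back-of-envelope arithmetic. The per-session figures below are illustrative assumptions, not measured data, but they show how quickly an unattended loop dwarfs casual chat:

```python
# Illustrative assumptions: a casual chat user vs. an overnight agent run.
CASUAL_TOKENS_PER_DAY = 20_000   # a few conversations' worth of tokens
AGENT_TOKENS_PER_STEP = 50_000   # context + tool output per loop iteration
AGENT_STEPS_OVERNIGHT = 80       # an 8-hour unattended run

agent_total = AGENT_TOKENS_PER_STEP * AGENT_STEPS_OVERNIGHT
equivalent_users = agent_total / CASUAL_TOKENS_PER_DAY
print(f"one overnight agent run ~ {equivalent_users:.0f} casual users")  # → 200
```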


Blackwell GPU rental prices climbed 48 percent in two months. CoreWeave raised prices more than 20 percent late last year and now forces smaller customers onto three-year contracts. Bank of America expects compute demand to outstrip supply through 2029. PJM, the grid operator for the eastern US, warned Friday that it needs 15 gigawatts of additional power tied to AI by early 2027.

The subsidy stopped being theoretical. It started bleeding.

The open bar closes

Watch the pattern. In late March, Anthropic quietly tightened five-hour session limits for Pro and Max users during peak hours, 5 a.m. to 11 a.m. Pacific, weekdays. About seven percent of users started hitting caps they had not hit before, per the company's own disclosure. Claude Code's prompt-cache time-to-live shifted back from one hour to five minutes in early March, driving up quota burn for long coding sessions. OpenClaw, a popular agent framework, got pulled out of the flat-fee bundle on April 4 and moved into usage-metered billing, with heavy users facing bills potentially 50 times higher.

Each move was defended as a "product change" or an "optimization." None was framed as a price hike. Put them together and the shape is obvious. Anthropic is lifting features out of the subscription, welding meters onto each one, and sending customers the difference.

You can feel the institutional anxiety in the public responses. Anthropic engineers denied, on X and GitHub, that the company had degraded Claude. They are not wrong about the model. They are cornered on the framing. The model is the same. The economics underneath it are not.

One AMD senior director filed a data-heavy GitHub complaint on April 2, analyzing 6,852 Claude Code session files to argue Claude had stopped reasoning as deeply. Anthropic's Claude Code lead walked through her analysis, conceded that thinking-effort defaults had been lowered on March 3 to "the best balance across intelligence, latency and cost," and explained the rest as UI changes. Translate: we dialed down the compute and hoped nobody would notice.

People noticed. That is why the usage-based migration is happening in public now. Enterprise buyers will not accept a silent shrinkflation. They want the meter visible, the billing legible, and someone else to blame when the monthly number gets ugly.

Who pays for the buildout

The shift does real work for Anthropic. It transfers compute cost from balance sheet to invoice. It lets the company keep signing million-dollar accounts without needing to pre-buy the GPU capacity to serve them flat-rate. It gives Anthropic a clean story for Wall Street. Every new revenue dollar is now backed by a compute dollar, metered and paid.

It breaks something else, though. Claude's growth engine was individual developers falling in love with Claude Code on their own dime, dragging it into their startups, then pushing employers to buy enterprise seats. That pipeline runs on a cheap Pro plan with enough headroom to experiment. Tighten the plan, meter the agents, add surprise quota math, and you poison the upstream.

Business Insider spoke with three of those users last week. One restructured his entire workday around limit resets. Another now breaks a single project into four micro-chats to conserve tokens. A third simply stops working when the cap hits, because manual coding feels pointless after Claude has spoiled him. The affection is intact. The relationship is not.

The crack nobody is naming

Here is the part Anthropic will not say out loud. Axios went hunting for a historical comparison and came back empty. Google's search-advertising ramp was the previous record. Anthropic covered nearly four times that ground in a single quarter. Snowflake took a decade to reach a billion in run rate. Anthropic added twenty-nine billion in roughly a year. More than 1,000 customers now pay over a million dollars a year for Claude.

None of those numbers mean anything if the service cannot stay up. 98.95 percent is the crack. Enterprise buyers have been trained by twenty years of cloud discipline to treat that figure as disqualifying, and some are already voting with their wallets. Ramp card data shows Anthropic gaining on OpenAI in business spend, yes. That data lags the service degradation. The next three months test whether customer affection for Claude outruns exhaustion with its downtime.

Expect two things from here. First, every major AI provider running agentic workloads moves to usage-based enterprise billing within six months. OpenAI already started, shifting Codex from flat-message to token metering in early April. GitHub tightened Copilot limits on April 10. Windsurf swapped credits for daily quotas. The flat-fee era is over. The memo is circulating.

Second, Anthropic keeps telling two stories at once. The growth story for investors goes like this. Thirty billion dollars, 1,000 million-dollar accounts, a three-year curve that would embarrass Rockefeller. The rationing story for users lives somewhere else entirely. Capacity tightens during peak hours. Effort defaults quietly dropped in March. Session caps hit sooner than they used to. OpenClaw got its own meter. Both stories are true. Neither is sustainable without the other eventually catching up.

David Hsu made his call already. He kept the inferior model because it stayed up. That is the verdict that should scare Anthropic more than any benchmark screenshot on X. It is not a performance problem. It is a trust problem. You cannot patch trust with a pricing page.

Frequently Asked Questions

What actually changed in Anthropic's enterprise pricing?

The enterprise plan's seat fee now only covers platform access. Usage across Claude, Claude Code, and Cowork is billed separately at standard API rates based on actual consumption. Organizations on older seat-based plans with fixed usage allowances must migrate by their next contract renewal or lose the grandfathered terms.

Why did Retool's founder switch from Claude to OpenAI?

David Hsu told the Wall Street Journal he preferred Anthropic's Opus 4.6 model, but the service kept going down. Anthropic's API uptime over the 90 days ending April 8 was 98.95 percent, compared to the 99.99 percent standard that established cloud providers maintain. That gap translates to roughly 100 hours of downtime a year versus 50 minutes.
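The downtime figures follow from simple arithmetic on the uptime percentages. A quick illustrative sketch (annualizing the 90-day uptime figure for comparison is an assumption made here, not something the article does explicitly):

```python
# Convert an uptime percentage into implied downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def downtime_per_year(uptime_pct: float) -> float:
    """Hours of downtime implied by a given annual uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

print(downtime_per_year(98.95))        # ~92 hours, roughly the "100 hours" cited
print(downtime_per_year(99.99) * 60)   # ~53 minutes, roughly the "50 minutes" cited
```

At four-nines, downtime is measured in minutes per year; at two-nines, in days. That two-orders-of-magnitude gap is the "crack" the analysis refers to.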

What is the compute crunch behind the pricing shift?

Blackwell GPU rental prices climbed 48 percent in two months. CoreWeave raised prices more than 20 percent late last year and now forces smaller customers into three-year contracts. Bank of America expects demand to outstrip supply through 2029. PJM, the eastern US grid operator, is looking for 15 gigawatts of additional power tied to AI by early 2027.

What did Anthropic change about Claude Code sessions?

In late March, Anthropic tightened five-hour session limits for Pro and Max users during weekday peak hours of 5 a.m. to 11 a.m. Pacific. About 7 percent of users started hitting caps they had not hit before. Claude Code's prompt-cache time-to-live shifted from one hour back to five minutes in early March, driving up quota burn for long coding sessions.

Are other AI providers moving to usage-based billing too?

Yes. OpenAI shifted Codex from flat-message pricing to token metering in early April and introduced a new $100 Pro tier for compute-heavy coding. GitHub tightened Copilot limits on April 10. Windsurf replaced its credit system with daily and weekly quotas in March. The flat-fee subscription era for agentic workloads is effectively over.



Marcus Schuler

San Francisco

Editor-in-Chief and founder of Implicator.ai. Former ARD correspondent and senior broadcast journalist with 10+ years covering tech. Writes daily briefings on policy and market developments. Based in San Francisco. E-mail: editor@implicator.ai

Is Anthropic 'nerfing' Claude? Users increasingly report performance degradation as leaders push back | VentureBeat

  • User Accusations: Developers And AI Power Users Claim That Claude Model Performance Has Diminished Recently
  • Shrinkflation Concerns: Public Discourse Suggests Customers Are Receiving Reduced Model Capability For The Same Financial Cost
  • Detailed Analysis: An AMD Official Conducted A Comprehensive Study Of Claude Code Logs Suggesting A Regression In Reasoning Depth
  • Corporate Denial: Anthropic Representatives Maintain That The Model Weights Remain Unchanged Regardless Of User Perception
  • Product Adjustments: The Company Acknowledges Modifying Default Effort Levels And Interface Elements To Balance Latency And Costs
  • Benchmark Disputes: Viral Claims Regarding Model Hallucinations Have Faced Peer Rebuttal Due To Inconsistent Comparison Methodology
  • Usage Restrictions: Confirmed Changes To Session Limits And Cache Settings Have Intensified User Suspicions Regarding System Performance
  • Trust Deficit: A Growing Gap Exists Between Internal Product Definitions Of Model Quality And The Practical Experiences Of Professional Users

A growing number of developers and AI power users are taking to social media to accuse Anthropic of degrading the performance of Claude Opus 4.6 and Claude Code — intentionally or as an outcome of compute limits — arguing that the company’s flagship coding model feels less capable, less reliable and more wasteful with tokens than it did just weeks ago.

The complaints have spread quickly on GitHub, X and Reddit over the past several weeks, with several high-reach posts alleging that Claude has become worse at sustained reasoning, more likely to abandon tasks midway through, and more prone to hallucinations or contradictions.

Some users have framed the issue as “AI shrinkflation” — the idea that customers are paying the same price for a weaker product.

Others have gone further, suggesting Anthropic may be throttling or otherwise tuning Claude downward during periods of heavy demand.

Those claims remain unproven, and Anthropic employees have publicly denied that the company degrades models to manage capacity. At the same time, Anthropic has acknowledged real changes to usage limits and reasoning defaults in recent weeks, which has made the broader debate more combustible.

VentureBeat has reached out to Anthropic for further clarification on the recent accusations, including whether any recent changes to reasoning defaults, context handling, throttling behavior, inference parameters or benchmark methodology could help explain the spike in complaints.

We have also asked how Anthropic explains the recent benchmark-related claims and whether it plans to publish additional data that could reassure customers. An Anthropic spokesperson did not address the questions individually, instead referring us to X posts by Claude Code creator Boris Cherny and Claude Code team member Thariq Shihipar regarding Opus 4.6 performance and usage limits, respectively. Both X posts are also referenced and linked below.

Viral user complaints, including from an AMD Senior Director, argue Claude has become less capable

One of the most detailed public complaints originated as a GitHub issue filed on April 2, 2026, by Stella Laurenzo, whose LinkedIn profile identifies her as a Senior Director in AMD’s AI group.

In that post, Laurenzo wrote that Claude Code had regressed to the point that it could not be trusted for complex engineering work, then backed that claim with a sprawling analysis of 6,852 Claude Code session files, 17,871 thinking blocks and 234,760 tool calls.

The complaint argued that, starting in February, Claude’s estimated reasoning depth fell sharply while signs of poorer performance rose alongside it, including more premature stopping, more “simplest fix” behavior, more reasoning loops, and a measurable shift from research-first behavior to edit-first behavior.

The post’s broader point was that for advanced engineering workflows, extended reasoning is not a luxury but part of what makes the model usable in the first place.

That GitHub thread then escaped into the broader social media conversation, with X users including @Hesamation, who posted screenshots of Laurenzo's GitHub post to X on April 11, turning it into an even more viral talking point.

That amplification mattered because it gave the wider “Claude is getting worse” narrative something more concrete than anecdotal frustration: a long, data-heavy post from a senior AI leader at a major chip company arguing that the regression was visible in logs, tool-use patterns and user corrections, not just gut feeling.

Anthropic’s public response focused on separating perceived changes from actual model degradation. In a pinned follow-up on the same GitHub issue posted a week ago, Claude Code lead Boris Cherny thanked Laurenzo for the care and depth of the analysis but disputed its main conclusion.

Cherny said the “redact-thinking-2026-02-12” header cited in the complaint is a UI-only change that hides thinking from the interface and reduces latency, but “does not impact thinking itself,” “thinking budgets,” or how extended reasoning works under the hood.

He also said two other product changes likely affected what users were seeing: Opus 4.6’s move to adaptive thinking by default on Feb. 9, and a March 3 shift to medium effort, or effort level 85, as the default for Opus 4.6, which he said Anthropic viewed as the best balance across intelligence, latency and cost for most users.

Cherny added that users who want more extended reasoning can manually switch effort higher by typing /effort high in Claude Code terminal sessions.

That exchange gets at the core of the controversy. Critics like Laurenzo argue that Claude’s behavior in demanding coding workflows has plainly worsened and point to logs and usage patterns as evidence.

Anthropic, by contrast, is not saying nothing changed. It is saying the biggest recent changes were product and interface choices that affect what users see and how much effort the system expends by default, not a secret downgrade of the underlying model. That distinction may be technically important, but for power users who feel the product is delivering worse results, it is not necessarily a satisfying one.

External coverage from TechRadar and PC Gamer further amplified Laurenzo's post and the larger wave of agreement from some power users.

Another viral post on X from developer Om Patel on April 7 made the same argument in even more direct terms, claiming that someone had “actually measured” how much “dumber” Claude had gotten and summarizing the result as a 67% drop.

That post helped popularize the “AI shrinkflation” label and pushed the controversy beyond hard-core Claude Code users into the broader AI discourse on X.

These claims have resonated because they map closely onto what many frustrated users say they are seeing in practice: more unfinished tasks, more backtracking, more token burn and a stronger sense that Claude is less willing to reason deeply through complicated coding jobs than it was earlier this year.

Benchmark posts turned anecdotal frustration into a public controversy

The loudest benchmark-based claim came from BridgeMind, which runs the BridgeBench hallucination benchmark. On April 12, the account posted that Claude Opus 4.6 had fallen from 83.3% accuracy and a No. 2 ranking in an earlier result to 68.3% accuracy and No. 10 in a new retest, calling that proof that “Claude Opus 4.6 is nerfed.”

That post spread widely and became one of the main anchors for the broader public case that Anthropic had degraded the model.

Other users also circulated benchmark-related or test-based posts suggesting that Opus 4.6 was underperforming versus Opus 4.5 in practical coding tasks.

Still other posts pointed to TerminalBench-related results as supposed evidence that the model’s behavior had changed in certain harnesses or product contexts.

The effect was cumulative: benchmark screenshots, side-by-side tests and anecdotal frustration all began reinforcing one another in public.

That matters because benchmark claims tend to travel farther than more subjective complaints. A developer saying a model “feels worse” is one thing. A screenshot showing a ranking drop from No. 2 to No. 10, or a dramatic percentage swing in accuracy, gives the appearance of hard proof, even when the underlying comparison may be more complicated.

Critics of the benchmark claims say the evidence is weaker than it looks

The most important rebuttal to the BridgeBench claim did not come from Anthropic. It came from Paul Calcraft, an outside software and AI researcher on X, who argued that the viral comparison was misleading because the earlier Opus 4.6 result was based on only six tasks while the later one was based on 30.

In his words, it was a “DIFFERENT BENCHMARK.” He also said that on the six tasks the two runs shared in common, Claude’s score moved only modestly, from 87.6% previously to 85.4% in the later run, and that the bigger swing appeared to come mostly from a single fabrication result without repeats. He characterized that as something that could easily fall within ordinary statistical noise.

That outside rebuttal matters because it undercuts one of the cleanest and most viral claims in circulation. It does not prove users are wrong to think something has changed. But it does suggest that at least some of the benchmark evidence now driving the story may be overstated, poorly normalized or not directly comparable.

Even the BridgeBench post itself drew a community note to similar effect. The note said the two benchmark runs covered different scopes — six tasks in one case and 30 in the other — and that the common-task subset showed only a minor change. That does not make the later result meaningless, but it weakens the strongest version of the “BridgeBench proved it” argument.

This is now a key feature of the controversy: the claims are not all equally strong. Some are grounded in first-hand user experience. Some point to real product changes. Some rely on benchmark comparisons that may not be apples-to-apples. And some depend on inferences about hidden system behavior that users outside Anthropic cannot directly verify.

Earlier capacity limits gave users a reason to suspect more changes under the hood

The current backlash also lands in the shadow of a real, confirmed Anthropic policy change from late March. On March 26, Anthropic technical staffer Thariq Shihipar posted that, “To manage growing demand for Claude,” the company was adjusting how 5-hour session limits work for Free, Pro and Max subscribers during peak hours, while keeping weekly limits unchanged.

He added that during weekdays from 5 a.m. to 11 a.m. Pacific time, users would move through their 5-hour session limits faster than before. In follow-up posts, he said Anthropic had landed efficiency wins to offset some of the impact, but that roughly 7% of users would hit session limits they would not have hit before, particularly on Pro tiers.

In an email on March 27, 2026, Anthropic told VentureBeat that Team and Enterprise customers were not affected by those changes, and that the shift was not dynamically optimized per user but instead applied to the peak-hour window the company had publicly described. Anthropic also said it was continuing to invest in scaling capacity.

Those comments were about session limits, not model downgrades. But they are important context, because they establish two things that users now keep connecting in public: first, Anthropic has been dealing with surging demand; second, it has already changed how usage is rationed during busy periods. That does not prove Anthropic reduced model quality. It does help explain why so many users are primed to believe something else may also have changed.

Prompt caching and TTL

A separate, more recent GitHub issue broadens the dispute beyond model quality and into pricing and quota behavior. In issue #46829, user seanGSISG argued that Claude Code’s prompt-cache time-to-live, or TTL, appeared to shift from a one-hour setting back to a five-minute setting in early March, based on analysis of nearly 120,000 API calls drawn from Claude Code session logs across two machines.

The complaint argues that this change drove meaningful increases in cache-creation costs and quota burn, especially for long-running coding sessions where cached context expires quickly and must be rebuilt. The author claims that this helps explain why some subscription users began hitting usage limits they had not previously encountered.

What makes this issue notable is that Anthropic did not flatly deny that something changed. In a reply on the thread, Jarred Sumner said the March 6 change was real and intentional, but rejected the framing that it was a regression. He said Claude Code uses different cache durations for different request types, and that one-hour cache is not always cheaper because one-hour writes cost more up front and only save money when the same cached context is reused enough times to justify it.

In his telling, the change was part of ongoing cache optimization work, not a silent downgrade, and the pre–March 6 behavior described in the issue “wasn’t the intended steady state.”

The thread later drew a more detailed response from Anthropic’s Cherny, who described one-hour caching as “nuanced” and said the company has been testing heuristics to improve cache hit rates, token usage and latency for subscribers. Cherny said Anthropic keeps five-minute cache for many queries, including subagents that are rarely resumed, and said turning off telemetry also disables experiment gates, which can cause Claude Code to fall back to a five-minute default in some cases.

He added that Anthropic plans to expose environment variables that let users force one-hour or five-minute cache behavior directly. Together, those replies do not validate the issue author’s claim that Anthropic silently made Claude Code more expensive overall, but they do confirm that Anthropic has been actively experimenting with cache behavior behind the scenes during the same period users began complaining more loudly about quota burn and changing product behavior.
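Sumner's point that a one-hour cache "is not always cheaper" is a break-even calculation: long-TTL writes cost more up front and only pay off if the cached context is reused enough times before a short-TTL cache would have expired. A hedged sketch with hypothetical cost multipliers (placeholders for illustration, not Anthropic's actual rates):

```python
# Break-even sketch: when does a longer-lived prompt cache pay off?
# Multipliers are HYPOTHETICAL: writing to a long-TTL cache is assumed to
# cost more up front than a short-TTL write, while cache reads are cheap.

def cost(write_mult: float, read_mult: float, reuses: int, tokens: float = 1.0) -> float:
    """Cost of one cache write plus `reuses` cache reads,
    in units of the base input-token price."""
    return tokens * (write_mult + read_mult * reuses)

SHORT = dict(write_mult=1.25, read_mult=0.1)  # e.g. a 5-minute cache
LONG = dict(write_mult=2.0, read_mult=0.1)    # e.g. a 1-hour cache

# If the short cache expires between turns, every turn pays the write again;
# the long cache writes once and serves cheap reads thereafter.
turns = 5
short_total = turns * cost(**SHORT, reuses=1)  # re-write each turn: 6.75
long_total = cost(**LONG, reuses=turns)        # write once, read 5x: 2.5

print(short_total, long_total)
```

With one reuse the short cache wins (1.35 vs 2.1 in these units); by five reuses the long cache is far cheaper. Which side of the break-even a session lands on depends on how often context is actually resumed, which is why Anthropic's per-request heuristics and the five-minute default for rarely-resumed subagents are plausible on their face.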

Anthropic says user-facing changes, not secret degradation, explain much of the uproar

Anthropic-affiliated employees have publicly pushed back on the broadest accusations. In one widely circulated reply on X, Cherny responded to claims that Anthropic had secretly nerfed Claude Code by writing, “This is false.”

He said Claude Code had been defaulted to medium effort in response to user feedback that Claude was consuming too many tokens, and that the change had been disclosed both in the changelog and in a dialog shown to users when they opened Claude Code.

That response is notable because it concedes a meaningful product change while rejecting the more conspiratorial interpretation of it. Anthropic is not saying nothing changed. It is saying that what changed was disclosed and was aimed at balancing token use, not secretly reducing model quality.

Public documentation also supports the fact that effort defaults have been in motion. Claude Code’s changelog says that on April 7, Anthropic changed the default effort level from medium to high for API-key users as well as Bedrock, Vertex, Foundry, Team and Enterprise users.

That suggests Anthropic has actively been tuning these settings across different segments, which could plausibly affect user perceptions even if the core model weights are unchanged.

Shihipar has also directly denied the broader demand-management accusation. In a reply on X posted April 11, he said Anthropic does not “degrade” its models to better serve demand. He also said that changes to thinking summaries affected how some users were measuring Claude’s “thinking,” and that the company had not found evidence backing the strongest qualitative claims now spreading online.

The real issue may be trust as much as model quality

What is clear is that a trust gap has opened between Anthropic and some of its most demanding users.

For developers who rely on Claude Code all day, subtle shifts in visible thinking output, effort defaults, token burn, latency tradeoffs or usage caps can feel indistinguishable from a weaker model.

That is true whether the root cause is a product setting, a UI change, an inference-policy tweak, capacity pressure or a genuine quality regression.

It also means both sides of the fight may be talking past each other. Users are describing what they experience: more friction, more failures and less confidence. Anthropic is responding in product terms: effort defaults, hidden thinking summaries, changelog disclosures, and denials that demand pressure is causing secret model degradation.

Those are not necessarily incompatible descriptions. A model can feel worse to users even if the company believes it has not “nerfed” the underlying model in the way critics allege. But the dispute comes at a time when Anthropic's chief rival OpenAI has pivoted to put more resources behind Codex, its competing enterprise- and vibe-coding-focused product, even offering a new, more mid-range ChatGPT subscription to boost usage of the tool. That is not the kind of publicity that stands to benefit Anthropic or its customer retention.

At the same time, the public evidence remains mixed. Some of the most viral claims have come from developers with detailed logs and strong opinions based on repeated use. Some of the benchmark evidence has been challenged by outside observers on methodological grounds. And Anthropic’s own recent changes to limits and settings ensure that this debate is happening against a backdrop of real adjustments, not pure rumor.


E3 joint statement on Iran: activation of the snapback - GOV.UK

  • Nuclear Proliferation Constraints: The E3 maintains the strategic objective of preventing Iran from acquiring or developing nuclear weapons through international oversight.
  • Snapback Mechanism Implementation: UN Security Council resolutions from 2006 to 2010 were successfully reinstated on 27 September 2025 due to consistent Iranian non-compliance.
  • Documented Program Violations: IAEA reports indicate that Iran possesses enriched uranium stockpiles 48 times the agreed limits and lacks credible civilian justifications for current activity.
  • Diplomatic Engagement Failures: Multiple attempts at negotiations, including dispute resolution mechanisms and specific conditional offers, were rejected or ignored by the Iranian government.
  • Sanctions Enforcement Mandate: Member states are now urged to enforce the re-applied restrictive measures to ensure international nuclear non-proliferation obligations are met.

We, the Foreign Ministers of France, Germany and the United Kingdom (the E3), continue to share the fundamental objective that Iran shall never seek, acquire or develop a nuclear weapon. With this objective in mind, our countries agreed first the Joint Plan of Action (JPoA) in 2013 and subsequently the Joint Comprehensive Plan of Action (JCPoA) in 2015, together with the United States, Russia and China. And it is due to Iran’s persistent and significant non-performance of its JCPoA commitments that we triggered the snapback mechanism on 28 August 2025.

We welcome the re-instatement since 20:00 EDT (00:00 GMT) on 27 September 2025 of Resolutions 1696 (2006), 1737 (2006), 1747 (2007), 1803 (2008), 1835 (2008), and 1929 (2010) after completion of the snapback process as provided for in UN Security Council Resolution 2231. We urge Iran and all states to abide fully by these resolutions.

These resolutions are not new: they contain a set of sanctions and other restrictive measures that were previously imposed by the UN Security Council and relate to Iran’s proliferation activities. Those measures were lifted by the Council in the context of the JCPoA, at a time when Iran had committed to ensuring its nuclear programme was exclusively peaceful. Given that Iran repeatedly breached these commitments, the E3 had no choice but to trigger the snapback procedure, at the end of which those resolutions were brought back into force. 

Since 2019, Iran has exceeded all limits on its nuclear programme that it had freely committed to under the JCPoA. According to the IAEA’s report of 4 September 2025, Iran holds a quantity of enriched uranium which is 48 times the JCPoA limit. Today, Iran’s stockpile is entirely outside of IAEA monitoring. This includes 10 ‘Significant Quantities’ of High Enriched Uranium (HEU) – 10 times the approximate amount of nuclear material for which the possibility of manufacturing a nuclear explosive device cannot be excluded. Iran has no credible civilian justification whatsoever for its HEU stockpile. No other country without a nuclear weapons programme enriches uranium to such levels and at this scale.

Despite these long-standing violations, the E3 have continuously made every effort to avoid triggering snapback, bring Iran back into compliance and reach a durable and comprehensive diplomatic resolution. We triggered the JCPoA’s dispute resolution mechanism in January 2020 as acknowledged by the JCPoA coordinator. In 2020 and 2021 we engaged in months of talks with the aim of fully restoring the JCPoA and returning the United States to the deal. Instead, Iran chose to reject two offers put on the table by the JCPoA coordinator in 2022 and to further expand its nuclear activities in clear breach of its JCPoA commitments.

In July 2025, we offered Iran a limited, one-time snapback extension provided that Iran agreed to resume direct and unconditional negotiations with the United States, return to compliance with its legally binding safeguards obligations, and address its high enriched uranium stockpile. These measures were fair and achievable. Iran did not engage seriously with this offer.

On 28 August, in view of Iran’s continued nuclear escalation, France, Germany and the United Kingdom initiated the “snapback” mechanism as a last resort, in accordance with paragraph 11 of Security Council Resolution 2231. This began a 30-day process designed to give Iran an opportunity to address concerns over its nuclear programme. Our snapback extension offer remained on the table during that period.

Regrettably, Iran did not take the necessary actions to address our concerns, nor to meet our asks on extension, despite extensive dialogue, including during United Nations High-Level Week. In particular, Iran has not authorised IAEA inspectors to regain access to Iran’s nuclear sites, nor has it produced and transmitted to the IAEA a report accounting for its stockpile of high-enriched uranium.

On 19 September, in accordance with UNSCR 2231, the Security Council voted on a resolution that would have maintained sanctions-lifting on Iran. The outcome of the vote was an unambiguous no. This decision sent a clear signal that all states must abide by their international commitments and obligations regarding nuclear non-proliferation.

France, Germany and the United Kingdom are now focusing, as a matter of urgency, on the swift reintroduction of restrictions reapplied by these resolutions, in accordance with our obligations as UN member states. We urge all UN member states to implement these sanctions.

Our countries will continue to pursue diplomatic routes and negotiations. The reimposition of UN sanctions is not the end of diplomacy. We urge Iran to refrain from any escalatory action and to return to compliance with its legally binding safeguards obligations. The E3 will continue to work with all parties towards a new diplomatic solution to ensure Iran never gets a nuclear weapon.

Media enquiries

Email newsdesk@fcdo.gov.uk

Telephone 020 7008 3100

Email the FCDO Newsdesk (monitored 24 hours a day) in the first instance, and we will respond as soon as possible.


Early weight gain can have lifelong consequences | Lund University

  • Study Scope: Researchers tracked over 600,000 individuals between the ages of 17 and 60 to analyze the impact of weight fluctuations on long-term mortality.
  • Early Onset Risks: Rapid weight gain in early adulthood correlates with an approximately 70 percent higher risk of premature death from obesity-related conditions compared to individuals who remain at a healthy weight.
  • Biological Exposure: The increased mortality risk is partially attributed to the prolonged duration of exposure to the physiological effects of excess body mass.
  • Cancer Exceptions: Weight gain patterns in women show a different correlation regarding cancer risk, suggesting that hormonal shifts, such as those occurring during menopause, may influence health outcomes independently of duration.
  • Methodological Robustness: The research utilized objective weight measurements taken by healthcare professionals across multiple intervals, providing more reliable data than self-reported historical weight tracking.

When in life we gain weight can have a significant impact on our health many years later. In a study involving over 600,000 people, researchers at Lund University in Sweden have investigated how changes in weight between the ages of 17 and 60 are linked to the risk of dying from various diseases. The results show a clear pattern: weight gain early in adulthood has the greatest impact.

It has long been known that obesity increases the risk of several diseases. In this new study, researchers have instead investigated how changes in weight over the course of adulthood affect health.  
 
“The most consistent finding is that weight gain at a younger age is linked to a higher risk of premature death later in life, compared with people who gain less weight,” says Tanja Stocks, Associate Professor of Epidemiology at Lund University. She is one of the researchers behind the study, which has now been published in eClinicalMedicine. 

Large-scale data and long-term tracking

The study is based on data from over 600,000 people, who were tracked via various registers. To be included in the study, participants needed to have had their weight assessed on at least three occasions, for example during early pregnancy, at military conscription, or as a participant in a research study. During the period studied by the researchers, 86,673 of the men and 29,076 of the women died.

The researchers analysed how weight changed between the ages of 17 and 60 and how this was linked to the risk of death overall and from various obesity-related diseases (see fact box). On average, both men and women gained 0.4 kg per year. 

Early weight gain and increased health risks

The results show that people who gained weight more rapidly over the adult life course had a higher risk of dying from the obesity-related diseases examined by the researchers. People with obesity onset between the ages of 17 and 29 had an approximately 70 per cent higher risk of premature death compared with those who did not develop obesity before age 60. Obesity onset was defined as the first time a person’s body mass index, a measure based on weight and height (kg/m²), reached 30 or higher.
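The study's onset definition translates directly into a lookup over repeated measurements. A minimal sketch (the measurement tuples and the example individual are hypothetical, not from the study's data):

```python
# Obesity onset per the study's definition: the first measurement at which
# BMI = weight_kg / height_m**2 reaches 30 or higher.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def obesity_onset_age(measurements):
    """measurements: (age, weight_kg, height_m) tuples in age order.
    Returns the age at the first BMI >= 30, or None if never reached."""
    for age, weight_kg, height_m in measurements:
        if bmi(weight_kg, height_m) >= 30:
            return age
    return None

# A hypothetical person measured at conscription and twice in adulthood:
history = [(18, 80, 1.80), (28, 99, 1.80), (45, 105, 1.80)]
print(obesity_onset_age(history))  # 28, i.e. the 17-29 early-onset group
```

Requiring at least three measurements per person, as the study did, is what makes onset age recoverable at all; a single mid-life weight would leave the timing ambiguous.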
 
“One possible explanation for why people with early obesity onset are at greater risk is their longer period exposed to the biological effects of excess weight,” says Huyen Le, doctoral student at Lund University and first author of the study.  

However, the pattern differed in one instance: when it came to cancer in women.   

“The risk was roughly the same regardless of when the weight gain occurred. If long-term exposure to obesity were the underlying risk factor, earlier weight gain should imply a higher risk. The fact that this is not the case suggests that other biological mechanisms may also play a role in cancer risk and survival in women,” says Huyen Le.  

One possible explanation could be hormonal changes associated with menopause.

“If our findings among women reflect what happens during menopause, the question is which came first: the chicken or the egg? It may be that hormonal changes affect weight and the age and duration over which these changes occur – and that weight simply reflects what’s happening in the body.”  

Study strengths and implications for public health

One strength of the study is that it is based on multiple weight measurements per individual, which enabled the researchers to estimate weight changes over decades of adulthood. Most other studies lack such data, and they also largely rely on self-reported weights recalled from a younger age.

“The majority of weight measurements in this study were, instead, taken by staff, for example in healthcare settings. The predominance of objectively measured weights in our study contributes to more reliable and robust results,” says Tanja Stocks. 

Increases in risk within a population can sometimes be difficult to interpret. For example, a 70 per cent increase in risk means that if 10 out of 1,000 people in the reference group die over a given period, approximately 17 out of 1,000 would die in the group with early obesity.    
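The conversion in the example above, from a relative risk increase to an absolute number of deaths, is a single multiplication. A minimal sketch (the function name is illustrative, not from the study):

```python
def deaths_with_increase(baseline_per_1000: float, relative_increase: float) -> float:
    """Scale a baseline death rate (per 1,000 people) by a relative risk increase."""
    return baseline_per_1000 * (1 + relative_increase)

# The article's example: a 70 per cent higher risk on a baseline
# of 10 deaths per 1,000 people over the same period.
print(deaths_with_increase(10, 0.70))  # 17.0 per 1,000
```

The same 70 per cent figure would mean only 1.7 extra deaths per 1,000 if the baseline were 1 per 1,000, which is why the absolute baseline matters when interpreting relative risks.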

“But we shouldn’t get too hung up on exact risk figures. They are rarely entirely accurate, as they are influenced, for example, by the factors taken into account in the study and the accuracy with which both risk factors and outcomes have been measured. However, it’s important to recognise the patterns, and this study sends an important message to decision-makers and politicians regarding the importance of preventing obesity,” says Tanja Stocks. 

Many researchers today refer to an “obesogenic society”, in which the environment hinders healthy lifestyles and promotes the development of obesity.   

“It’s up to policymakers to implement measures that we know are effective in combating obesity. This study provides further evidence that such measures are likely to have a positive impact on people’s health.” 

Facts: Obesity-related diseases 

Obesity is linked to an increased risk of several diseases. Some of the most important are: 

• Cardiovascular disease (most forms, e.g. heart attack and stroke) 
• Type 2 diabetes 
• High blood pressure 
• Fatty liver disease (non-alcohol-related) 
• Several types of cancer (e.g. cancer of the colon, liver, kidney, uterus and breast cancer after menopause)
