
Anthropic shifts enterprise billing to usage-based pricing

  • Pricing Structure Shift: Anthropic has transitioned its enterprise plan from flat seat fees to usage-based billing, requiring customers to pay for Claude, Claude Code, and Cowork at standard API rates.
  • Service Reliability Issues: Claude reported an API uptime of 98.95% over a recent 90-day period, significantly lower than the 99.99% industry standard, leading some enterprise clients to migrate to alternative providers.
  • Compute Resource Constraints: Rising costs and supply shortages for hardware, specifically Blackwell GPUs, alongside increasing power grid demands, have necessitated the removal of subsidized flat-rate access.
  • Agentic Workload Impact: The adoption of automated agentic workflows has caused a substantial surge in token consumption, rendering previous fixed-price subscription models financially unsustainable for the provider.
  • Industry Wide Transition: Reflecting broader market conditions, other major AI organizations are similarly moving away from flat-fee subscriptions toward usage-metered pricing models for compute-heavy services.

Anthropic has restructured its enterprise plan to bill Claude, Claude Code, and Cowork usage separately from seat fees, moving its largest business customers to per-token pricing at standard API rates, according to the company's updated enterprise help documentation. Organizations on older seat-based plans with fixed usage allowances must migrate by their next contract renewal or lose the grandfathered terms. The Information first reported the shift, describing it as tied to a deepening compute crunch that will raise bills for heavy business users significantly.

That is the official version of the story. The unofficial version starts with David Hsu.

David Hsu prefers Claude Opus 4.6. He said so publicly. Then he moved his company off it.

Hsu runs Retool, the software platform. He told the Wall Street Journal that Anthropic's model was better, but the service kept dying. His customers could not ship code when Claude choked. So he picked OpenAI, the inferior option, because the inferior option stayed up.

That is the tell.

Anthropic's API uptime over the 90 days ending April 8 was 98.95 percent, according to WSJ reporting. Established cloud providers commit to 99.99. A couple of extra nines do not sound like much until you translate them. At Anthropic's rate, that is roughly 92 hours of downtime a year. At the 99.99 standard, 53 minutes. For an enterprise buyer, that is the difference between a tool and a liability.
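The translation is simple arithmetic; a quick sketch (the only assumption is an 8,760-hour year):

```python
# Translate an uptime percentage into annual downtime.
HOURS_PER_YEAR = 24 * 365  # 8,760 (ignoring leap years)

def downtime_hours(uptime_pct: float) -> float:
    """Annual downtime, in hours, implied by an uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

print(f"{downtime_hours(98.95):.0f} hours/year")         # ≈ 92 hours
print(f"{downtime_hours(99.99) * 60:.0f} minutes/year")  # ≈ 53 minutes
```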

And then, quietly, Anthropic changed how it bills.

The Breakdown

  • Anthropic quietly moved enterprise seats to usage-based billing, unbundling Claude, Claude Code, and Cowork from flat fees.
  • Claude API uptime hit 98.95% over 90 days ending April 8, far below the 99.99% standard, pushing Retool founder David Hsu to switch to OpenAI.
  • Revenue more than tripled from $9B to $30B in four months, but session caps, cache TTL cuts, and OpenClaw metering show the subsidy bleeding.
  • Expect every major AI provider running agentic workloads to move to usage-based billing within six months.

AI-generated summary, reviewed by an editor. More on our AI guidelines.

What Anthropic stopped pretending

The enterprise help center now reads like a concession letter dressed as documentation. "The seat fee only covers access to the platform and doesn't include any usage," it says. "All usage across Claude, Claude Code, and Cowork is billed separately at standard API rates, based on what your team actually consumes." Anthropic is not framing the change as a price hike. The direction is unmistakable anyway. Run rate at the end of 2025 was $9 billion. Run rate today is $30 billion. And the company's message to its biggest customers is: budget more for compute.

That is not a typo. Revenue more than tripled in four months. The response was a billing restructure, not a victory lap.

The open bar was always a subsidy

For about eighteen months, the AI industry sold a story. The story was not that the models were good. The story was that the pricing was stable.

A $20 Claude Pro subscription gave you access to one of the best coding models in the world, running on compute that cost Anthropic significantly more than you paid. Power users always understood this. The arithmetic was never a secret. A single engineer on Claude Code burns through far more inference than a Pro fee could ever cover. The subscription was an open bar. Anthropic was buying the drinks.
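The never-secret arithmetic is easy to sketch. Every number below is a hypothetical stand-in, not Anthropic's actual pricing or any real engineer's logs; it only shows the shape of the gap:

```python
# Hypothetical illustration only: the rate and the usage figure are
# assumptions, not Anthropic's actual pricing or a real engineer's usage.
SEAT_FEE = 20.00            # Claude Pro, USD per month
RATE_PER_MTOK = 15.00       # assumed blended API rate, USD per million tokens
TOKENS_PER_DAY = 5_000_000  # assumed heavy Claude Code consumption
WORKDAYS = 22

api_equivalent = TOKENS_PER_DAY / 1e6 * RATE_PER_MTOK * WORKDAYS
print(f"API-equivalent cost: ${api_equivalent:,.0f}/month vs ${SEAT_FEE:.0f} subscription")
# Under these assumptions: $1,650 of monthly inference against a $20 fee.
```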

Open bars work until somebody shows up thirsty. In this case, the thirsty somebody is the agent.

Agentic workflows do not sip. They chain tools across steps and run loops without asking. They spawn subagents carrying their own contexts. Cached tokens burn by the hundred thousand. One engineer running Claude Code overnight can consume the token budget of 200 casual chat users. OpenAI, running the same math on its own platform, watched token usage jump from 6 billion per minute in October to 15 billion per minute by late March. That is not growth. That is a break in the load curve.


Blackwell GPU rental prices climbed 48 percent in two months. CoreWeave raised prices more than 20 percent late last year and now forces smaller customers onto three-year contracts. Bank of America expects compute demand to outstrip supply through 2029. PJM, the grid operator for the eastern US, warned Friday that it needs 15 gigawatts of additional power tied to AI by early 2027.

The subsidy stopped being theoretical. It started bleeding.

The open bar closes

Watch the pattern. In late March, Anthropic quietly tightened five-hour session limits for Pro and Max users during peak hours, 5 a.m. to 11 a.m. Pacific, weekdays. About seven percent of users started hitting caps they had not hit before, per the company's own disclosure. Claude Code's prompt-cache time-to-live shifted back from one hour to five minutes in early March, driving up quota burn for long coding sessions. OpenClaw, a popular agent framework, got pulled out of the flat-fee bundle on April 4 and moved into usage-metered billing, with heavy users facing bills potentially 50 times higher.

Each move was defended as a "product change" or an "optimization." None was framed as a price hike. Put them together and the shape is obvious. Anthropic is lifting features out of the subscription, welding meters onto each one, and sending customers the difference.

You can feel the institutional anxiety in the public responses. Anthropic engineers denied, on X and GitHub, that the company had degraded Claude. They are not wrong about the model. They are cornered on the framing. The model is the same. The economics underneath it are not.

One AMD senior director filed a data-heavy GitHub complaint on April 2, analyzing 6,852 Claude Code session files to argue Claude had stopped reasoning as deeply. Anthropic's Claude Code lead walked through her analysis, conceded that thinking-effort defaults had been lowered on March 3 to "the best balance across intelligence, latency and cost," and explained the rest as UI changes. Translate: we dialed down the compute and hoped nobody would notice.

People noticed. That is why the usage-based migration is happening in public now. Enterprise buyers will not accept silent shrinkflation. They want the meter visible, the billing legible, and someone else to blame when the monthly number gets ugly.

Who pays for the buildout

The shift does real work for Anthropic. It transfers compute cost from balance sheet to invoice. It lets the company keep signing million-dollar accounts without needing to pre-buy the GPU capacity to serve them flat-rate. It gives Anthropic a clean story for Wall Street. Every new revenue dollar is now backed by a compute dollar, metered and paid.

It breaks something else, though. Claude's growth engine was individual developers falling in love with Claude Code on their own dime, dragging it into their startups, then pushing employers to buy enterprise seats. That pipeline runs on a cheap Pro plan with enough headroom to experiment. Tighten the plan, meter the agents, add surprise quota math, and you poison the upstream.

Business Insider spoke with three of those users last week. One restructured his entire workday around limit resets. Another now breaks a single project into four micro-chats to conserve tokens. A third simply stops working when the cap hits, because manual coding feels pointless after Claude has spoiled him. The affection is intact. The relationship is not.

The crack nobody is naming

Here is the part Anthropic will not say out loud. Axios went hunting for a historical comparison and came back empty. Google's search-advertising ramp was the previous record. Anthropic covered nearly four times that ground in a single quarter. Snowflake took a decade to reach a billion in run rate. Anthropic added twenty-nine billion in roughly a year. More than 1,000 customers now pay over a million dollars a year for Claude.

None of those numbers mean anything if the service cannot stay up. 98.95 percent is the crack. Enterprise buyers have been trained by twenty years of cloud discipline to treat that figure as disqualifying, and some are already voting with their wallets. Ramp card data shows Anthropic gaining on OpenAI in business spend, yes. That data lags the service degradation. The next three months test whether customer affection for Claude outruns exhaustion with its downtime.

Expect two things from here. First, every major AI provider running agentic workloads moves to usage-based enterprise billing within six months. OpenAI already started, shifting Codex from flat-message to token metering in early April. GitHub tightened Copilot limits on April 10. Windsurf swapped credits for daily quotas. The flat-fee era is over. The memo is circulating.

Second, Anthropic keeps telling two stories at once. The growth story for investors goes like this. Thirty billion dollars, 1,000 million-dollar accounts, a three-year curve that would embarrass Rockefeller. The rationing story for users lives somewhere else entirely. Capacity tightens during peak hours. Effort defaults quietly dropped in March. Session caps hit sooner than they used to. OpenClaw got its own meter. Both stories are true. Neither is sustainable without the other eventually catching up.

David Hsu made his call already. He kept the inferior model because it stayed up. That is the verdict that should scare Anthropic more than any benchmark screenshot on X. It is not a performance problem. It is a trust problem. You cannot patch trust with a pricing page.

Frequently Asked Questions

What actually changed in Anthropic's enterprise pricing?

The enterprise plan's seat fee now only covers platform access. Usage across Claude, Claude Code, and Cowork is billed separately at standard API rates based on actual consumption. Organizations on older seat-based plans with fixed usage allowances must migrate by their next contract renewal or lose the grandfathered terms.

Why did Retool's founder switch from Claude to OpenAI?

David Hsu told the Wall Street Journal he preferred Anthropic's Opus 4.6 model, but the service kept going down. Anthropic's API uptime over the 90 days ending April 8 was 98.95 percent, compared to the 99.99 percent standard that established cloud providers maintain. That gap translates to roughly 92 hours of downtime a year versus 53 minutes.

What is the compute crunch behind the pricing shift?

Blackwell GPU rental prices climbed 48 percent in two months. CoreWeave raised prices more than 20 percent late last year and now forces smaller customers into three-year contracts. Bank of America expects demand to outstrip supply through 2029. PJM, the eastern US grid operator, is looking for 15 gigawatts of additional power tied to AI by early 2027.

What did Anthropic change about Claude Code sessions?

In late March, Anthropic tightened five-hour session limits for Pro and Max users during weekday peak hours of 5 a.m. to 11 a.m. Pacific. About 7 percent of users started hitting caps they had not hit before. Claude Code's prompt-cache time-to-live shifted from one hour back to five minutes in early March, driving up quota burn for long coding sessions.

Are other AI providers moving to usage-based billing too?

Yes. OpenAI shifted Codex from flat-message pricing to token metering in early April and introduced a new $100 Pro tier for compute-heavy coding. GitHub tightened Copilot limits on April 10. Windsurf replaced its credit system with daily and weekly quotas in March. The flat-fee subscription era for agentic workloads is effectively over.



Analysis


Marcus Schuler

San Francisco

Editor-in-Chief and founder of Implicator.ai. Former ARD correspondent and senior broadcast journalist with 10+ years covering tech. Writes daily briefings on policy and market developments. Based in San Francisco. E-mail: editor@implicator.ai


Is Anthropic 'nerfing' Claude? Users increasingly report performance degradation as leaders push back | VentureBeat

  • User accusations: Developers and AI power users claim that Claude model performance has diminished recently.
  • Shrinkflation concerns: Public discourse suggests customers are receiving reduced model capability for the same financial cost.
  • Detailed analysis: An AMD official conducted a comprehensive study of Claude Code logs suggesting a regression in reasoning depth.
  • Corporate denial: Anthropic representatives maintain that the model weights remain unchanged regardless of user perception.
  • Product adjustments: The company acknowledges modifying default effort levels and interface elements to balance latency and costs.
  • Benchmark disputes: Viral claims regarding model hallucinations have faced peer rebuttal due to inconsistent comparison methodology.
  • Usage restrictions: Confirmed changes to session limits and cache settings have intensified user suspicions regarding system performance.
  • Trust deficit: A growing gap exists between internal product definitions of model quality and the practical experiences of professional users.

A growing number of developers and AI power users are taking to social media to accuse Anthropic of degrading the performance of Claude Opus 4.6 and Claude Code — intentionally or as an outcome of compute limits — arguing that the company’s flagship coding model feels less capable, less reliable and more wasteful with tokens than it did just weeks ago.

The complaints have spread quickly on GitHub, X and Reddit over the past several weeks, with several high-reach posts alleging that Claude has become worse at sustained reasoning, more likely to abandon tasks midway through, and more prone to hallucinations or contradictions.

Some users have framed the issue as “AI shrinkflation” — the idea that customers are paying the same price for a weaker product.

Others have gone further, suggesting Anthropic may be throttling or otherwise tuning Claude downward during periods of heavy demand.

Those claims remain unproven, and Anthropic employees have publicly denied that the company degrades models to manage capacity. At the same time, Anthropic has acknowledged real changes to usage limits and reasoning defaults in recent weeks, which has made the broader debate more combustible.

VentureBeat has reached out to Anthropic for further clarification on the recent accusations, including whether any recent changes to reasoning defaults, context handling, throttling behavior, inference parameters or benchmark methodology could help explain the spike in complaints.

We have also asked how Anthropic explains the recent benchmark-related claims and whether it plans to publish additional data that could reassure customers. An Anthropic spokesperson did not address the questions individually, instead referring us to X posts by Claude Code creator Boris Cherny and Claude Code team member Thariq Shihipar regarding Opus 4.6 performance and usage limits, respectively. Both X posts are also referenced and linked below.

Viral user complaints, including from an AMD Senior Director, argue Claude has become less capable

One of the most detailed public complaints originated as a GitHub issue filed by Stella Laurenzo on April 2, 2026, whose LinkedIn profile identifies her as Senior Director in AMD’s AI group.

In that post, Laurenzo wrote that Claude Code had regressed to the point that it could not be trusted for complex engineering work, then backed that claim with a sprawling analysis of 6,852 Claude Code session files, 17,871 thinking blocks and 234,760 tool calls.

The complaint argued that, starting in February, Claude’s estimated reasoning depth fell sharply while signs of poorer performance rose alongside it, including more premature stopping, more “simplest fix” behavior, more reasoning loops, and a measurable shift from research-first behavior to edit-first behavior.

The post’s broader point was that for advanced engineering workflows, extended reasoning is not a luxury but part of what makes the model usable in the first place.

That GitHub thread then escaped into the broader social media conversation, with X users including @Hesamation, who posted screenshots of Laurenzo's GitHub post to X on April 11, turning it into an even more viral talking point.

That amplification mattered because it gave the wider “Claude is getting worse” narrative something more concrete than anecdotal frustration: a long, data-heavy post from a senior AI leader at a major chip company arguing that the regression was visible in logs, tool-use patterns and user corrections, not just gut feeling.

Anthropic’s public response focused on separating perceived changes from actual model degradation. In a pinned follow-up on the same GitHub issue posted a week ago, Claude Code lead Boris Cherny thanked Laurenzo for the care and depth of the analysis but disputed its main conclusion.

Cherny said the “redact-thinking-2026-02-12” header cited in the complaint is a UI-only change that hides thinking from the interface and reduces latency, but “does not impact thinking itself,” “thinking budgets,” or how extended reasoning works under the hood.

He also said two other product changes likely affected what users were seeing: Opus 4.6’s move to adaptive thinking by default on Feb. 9, and a March 3 shift to medium effort, or effort level 85, as the default for Opus 4.6, which he said Anthropic viewed as the best balance across intelligence, latency and cost for most users.

Cherny added that users who want more extended reasoning can manually switch effort higher by typing /effort high in Claude Code terminal sessions.

That exchange gets at the core of the controversy. Critics like Laurenzo argue that Claude’s behavior in demanding coding workflows has plainly worsened and point to logs and usage patterns as evidence.

Anthropic, by contrast, is not saying nothing changed. It is saying the biggest recent changes were product and interface choices that affect what users see and how much effort the system expends by default, not a secret downgrade of the underlying model. That distinction may be technically important, but for power users who feel the product is delivering worse results, it is not necessarily a satisfying one.

External coverage from TechRadar and PC Gamer further amplified Laurenzo's post and larger wave of agreement from some power users.

Another viral post on X from developer Om Patel on April 7 made the same argument in even more direct terms, claiming that someone had “actually measured” how much “dumber” Claude had gotten and summarizing the result as a 67% drop.

That post helped popularize the “AI shrinkflation” label and pushed the controversy beyond hard-core Claude Code users into the broader AI discourse on X.

These claims have resonated because they map closely onto what many frustrated users say they are seeing in practice: more unfinished tasks, more backtracking, more token burn and a stronger sense that Claude is less willing to reason deeply through complicated coding jobs than it was earlier this year.

Benchmark posts turned anecdotal frustration into a public controversy

The loudest benchmark-based claim came from BridgeMind, which runs the BridgeBench hallucination benchmark. On April 12, the account posted that Claude Opus 4.6 had fallen from 83.3% accuracy and a No. 2 ranking in an earlier result to 68.3% accuracy and No. 10 in a new retest, calling that proof that “Claude Opus 4.6 is nerfed.”

That post spread widely and became one of the main anchors for the broader public case that Anthropic had degraded the model.

Other users also circulated benchmark-related or test-based posts suggesting that Opus 4.6 was underperforming versus Opus 4.5 in practical coding tasks.

Still other posts pointed to TerminalBench-related results as supposed evidence that the model’s behavior had changed in certain harnesses or product contexts.

The effect was cumulative: benchmark screenshots, side-by-side tests and anecdotal frustration all began reinforcing one another in public.

That matters because benchmark claims tend to travel farther than more subjective complaints. A developer saying a model “feels worse” is one thing. A screenshot showing a ranking drop from No. 2 to No. 10, or a dramatic percentage swing in accuracy, gives the appearance of hard proof, even when the underlying comparison may be more complicated.

Critics of the benchmark claims say the evidence is weaker than it looks

The most important rebuttal to the BridgeBench claim did not come from Anthropic. It came from Paul Calcraft, an outside software and AI researcher on X, who argued that the viral comparison was misleading because the earlier Opus 4.6 result was based on only six tasks while the later one was based on 30.

In his words, it was a “DIFFERENT BENCHMARK.” He also said that on the six tasks the two runs shared in common, Claude’s score moved only modestly, from 87.6% previously to 85.4% in the later run, and that the bigger swing appeared to come mostly from a single fabrication result without repeats. He characterized that as something that could easily fall within ordinary statistical noise.

That outside rebuttal matters because it undercuts one of the cleanest and most viral claims in circulation. It does not prove users are wrong to think something has changed. But it does suggest that at least some of the benchmark evidence now driving the story may be overstated, poorly normalized or not directly comparable.

Even the BridgeBench post itself drew a community note to similar effect. The note said the two benchmark runs covered different scopes — six tasks in one case and 30 in the other — and that the common-task subset showed only a minor change. That does not make the later result meaningless, but it weakens the strongest version of the “BridgeBench proved it” argument.

This is now a key feature of the controversy: the claims are not all equally strong. Some are grounded in first-hand user experience. Some point to real product changes. Some rely on benchmark comparisons that may not be apples-to-apples. And some depend on inferences about hidden system behavior that users outside Anthropic cannot directly verify.

Earlier capacity limits gave users a reason to suspect more changes under the hood

The current backlash also lands in the shadow of a real, confirmed Anthropic policy change from late March. On March 26, Anthropic technical staffer Thariq Shihipar posted that, “To manage growing demand for Claude,” the company was adjusting how 5-hour session limits work for Free, Pro and Max subscribers during peak hours, while keeping weekly limits unchanged.

He added that during weekdays from 5 a.m. to 11 a.m. Pacific time, users would move through their 5-hour session limits faster than before. In follow-up posts, he said Anthropic had landed efficiency wins to offset some of the impact, but that roughly 7% of users would hit session limits they would not have hit before, particularly on Pro tiers.

In an email on March 27, 2026, Anthropic told VentureBeat that Team and Enterprise customers were not affected by those changes, and that the shift was not dynamically optimized per user but instead applied to the peak-hour window the company had publicly described. Anthropic also said it was continuing to invest in scaling capacity.

Those comments were about session limits, not model downgrades. But they are important context, because they establish two things that users now keep connecting in public: first, Anthropic has been dealing with surging demand; second, it has already changed how usage is rationed during busy periods. That does not prove Anthropic reduced model quality. It does help explain why so many users are primed to believe something else may also have changed.

Prompt caching and TTL

A separate, more recent GitHub issue broadens the dispute beyond model quality and into pricing and quota behavior. In issue #46829, user seanGSISG argued that Claude Code’s prompt-cache time-to-live, or TTL, appeared to shift from a one-hour setting back to a five-minute setting in early March, based on analysis of nearly 120,000 API calls drawn from Claude Code session logs across two machines.

The complaint argues that this change drove meaningful increases in cache-creation costs and quota burn, especially for long-running coding sessions where cached context expires quickly and must be rebuilt. The author claims that this helps explain why some subscription users began hitting usage limits they had not previously encountered.

What makes this issue notable is that Anthropic did not flatly deny that something changed. In a reply on the thread, Jarred Sumner said the March 6 change was real and intentional, but rejected the framing that it was a regression. He said Claude Code uses different cache durations for different request types, and that one-hour cache is not always cheaper because one-hour writes cost more up front and only save money when the same cached context is reused enough times to justify it.

In his telling, the change was part of ongoing cache optimization work, not a silent downgrade, and the pre–March 6 behavior described in the issue “wasn’t the intended steady state.”

The thread later drew a more detailed response from Anthropic’s Cherny, who described one-hour caching as “nuanced” and said the company has been testing heuristics to improve cache hit rates, token usage and latency for subscribers. Cherny said Anthropic keeps five-minute cache for many queries, including subagents that are rarely resumed, and said turning off telemetry also disables experiment gates, which can cause Claude Code to fall back to a five-minute default in some cases.

He added that Anthropic plans to expose environment variables that let users force one-hour or five-minute cache behavior directly. Together, those replies do not validate the issue author’s claim that Anthropic silently made Claude Code more expensive overall, but they do confirm that Anthropic has been actively experimenting with cache behavior behind the scenes during the same period users began complaining more loudly about quota burn and changing product behavior.
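Sumner's cost argument reduces to a break-even calculation, which a short sketch makes concrete. Every number below is an assumed placeholder, not Anthropic's published cache pricing: the premise is only that a longer TTL costs more per cache write but avoids repeated rewrites when a session's gaps outlast the shorter TTL.

```python
# Break-even sketch for prompt-cache TTL choice. All multipliers are
# assumptions for illustration, not Anthropic's actual rates.
BASE = 3.00        # assumed base input price, USD per million tokens
WRITE_5MIN = 1.25  # assumed write premium, 5-minute TTL
WRITE_1H = 2.00    # assumed write premium, 1-hour TTL
READ = 0.10        # assumed cache-read multiplier

def session_cost(cached_mtok: float, rewrites: int, write_mult: float, reads: int) -> float:
    """Cost of caching `cached_mtok` million tokens, rewritten `rewrites`
    times (each expiry forces a rewrite) and read back `reads` times."""
    return cached_mtok * BASE * (write_mult * rewrites + READ * reads)

# Long session with ~10-minute gaps between turns: the 5-minute cache
# expires every turn (12 rewrites); the 1-hour cache is written once.
print(session_cost(0.5, rewrites=12, write_mult=WRITE_5MIN, reads=12))  # ~24.30
print(session_cost(0.5, rewrites=1,  write_mult=WRITE_1H,   reads=12))  # ~4.80
```

For a one-shot query the ordering flips: a single cheap short-TTL write beats a single expensive long-TTL write, which is the substance of the claim that one-hour cache only saves money when the same context is reused enough times.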

Anthropic says user-facing changes, not secret degradation, explain much of the uproar

Anthropic-affiliated employees have publicly pushed back on the broadest accusations. In one widely circulated reply on X, Cherny responded to claims that Anthropic had secretly nerfed Claude Code by writing, “This is false.”

He said Claude Code had been defaulted to medium effort in response to user feedback that Claude was consuming too many tokens, and that the change had been disclosed both in the changelog and in a dialog shown to users when they opened Claude Code.

That response is notable because it concedes a meaningful product change while rejecting the more conspiratorial interpretation of it. Anthropic is not saying nothing changed. It is saying that what changed was disclosed and was aimed at balancing token use, not secretly reducing model quality.

Public documentation also supports the fact that effort defaults have been in motion. Claude Code’s changelog says that on April 7, Anthropic changed the default effort level from medium to high for API-key users as well as Bedrock, Vertex, Foundry, Team and Enterprise users.

That suggests Anthropic has actively been tuning these settings across different segments, which could plausibly affect user perceptions even if the core model weights are unchanged.

Shihipar has also directly denied the broader demand-management accusation. In a reply on X posted April 11, he said Anthropic does not “degrade” its models to better serve demand. He also said that changes to thinking summaries affected how some users were measuring Claude’s “thinking,” and that the company had not found evidence backing the strongest qualitative claims now spreading online.

The real issue may be trust as much as model quality

What is clear is that a trust gap has opened between Anthropic and some of its most demanding users.

For developers who rely on Claude Code all day, subtle shifts in visible thinking output, effort defaults, token burn, latency tradeoffs or usage caps can feel indistinguishable from a weaker model.

That is true whether the root cause is a product setting, a UI change, an inference-policy tweak, capacity pressure or a genuine quality regression.

It also means both sides of the fight may be talking past each other. Users are describing what they experience: more friction, more failures and less confidence. Anthropic is responding in product terms: effort defaults, hidden thinking summaries, changelog disclosures, and denials that demand pressure is causing secret model degradation.

Those are not necessarily incompatible descriptions. A model can feel worse to users even if the company believes it has not "nerfed" the underlying model in the way critics allege. The timing is still awkward: Anthropic's chief rival OpenAI has recently pivoted to put more resources behind Codex, its competing enterprise and vibe-coding focused product, and has introduced a new mid-range ChatGPT subscription to boost the tool's usage. That is not the kind of publicity that helps Anthropic or its customer retention.

At the same time, the public evidence remains mixed. Some of the most viral claims have come from developers with detailed logs and strong opinions based on repeated use. Some of the benchmark evidence has been challenged by outside observers on methodological grounds. And Anthropic’s own recent changes to limits and settings ensure that this debate is happening against a backdrop of real adjustments, not pure rumor.


E3 joint statement on Iran: activation of the snapback - GOV.UK

  • Nuclear Proliferation Constraints: The E3 maintains the strategic objective of preventing Iran from acquiring or developing nuclear weapons through international oversight.
  • Snapback Mechanism Implementation: UN Security Council resolutions adopted between 2006 and 2010 were reinstated on 27 September 2025 in response to persistent Iranian non-compliance.
  • Documented Program Violations: IAEA reports indicate that Iran possesses enriched uranium stockpiles 48 times the agreed limits and lacks credible civilian justifications for current activity.
  • Diplomatic Engagement Failures: Multiple attempts at negotiations, including dispute resolution mechanisms and specific conditional offers, were rejected or ignored by the Iranian government.
  • Sanctions Enforcement Mandate: Member states are now urged to enforce the re-applied restrictive measures to ensure international nuclear non-proliferation obligations are met.

We, the Foreign Ministers of France, Germany and the United Kingdom (the E3), continue to share the fundamental objective that Iran shall never seek, acquire or develop a nuclear weapon. With this objective in mind, our countries agreed first the Joint Plan of Action (JPoA) in 2013 and subsequently the Joint Comprehensive Plan of Action (JCPoA) in 2015, together with the United States, Russia and China. And it is due to Iran’s persistent and significant non-performance of its JCPoA commitments that we triggered the snapback mechanism on 28 August 2025.

We welcome the re-instatement since 20:00 EDT (00:00 GMT) on 27 September 2025 of Resolutions 1696 (2006), 1737 (2006), 1747 (2007), 1803 (2008), 1835 (2008), and 1929 (2010) after completion of the snapback process as provided for in UN Security Council Resolution 2231. We urge Iran and all states to abide fully by these resolutions.

These resolutions are not new: they contain a set of sanctions and other restrictive measures that were previously imposed by the UN Security Council and relate to Iran’s proliferation activities. Those measures were lifted by the Council in the context of the JCPoA, at a time when Iran had committed to ensuring its nuclear programme was exclusively peaceful. Given that Iran repeatedly breached these commitments, the E3 had no choice but to trigger the snapback procedure, at the end of which those resolutions were brought back into force. 

Since 2019, Iran has exceeded all limits on its nuclear programme that it had freely committed to under the JCPoA. According to the IAEA’s report of 4 September 2025, Iran holds a quantity of enriched uranium which is 48 times the JCPoA limit. Today, Iran’s stockpile is entirely outside of IAEA monitoring. This includes 10 ‘Significant Quantities’ of High Enriched Uranium (HEU) – 10 times the approximate amount of nuclear material for which the possibility of manufacturing a nuclear explosive device cannot be excluded. Iran has no credible civilian justification whatsoever for its HEU stockpile. No other country without a nuclear weapons programme enriches uranium to such levels and at this scale.

Despite these long-standing violations, the E3 have continuously made every effort to avoid triggering snapback, bring Iran back into compliance and reach a durable and comprehensive diplomatic resolution. We triggered the JCPoA’s dispute resolution mechanism in January 2020 as acknowledged by the JCPoA coordinator. In 2020 and 2021 we engaged in months of talks with the aim of fully restoring the JCPoA and returning the United States to the deal. Instead, Iran chose to reject two offers put on the table by the JCPoA coordinator in 2022 and to further expand its nuclear activities in clear breach of its JCPoA commitments.

In July 2025, we offered Iran a limited, one-time snapback extension provided that Iran agreed to resume direct and unconditional negotiations with the United States, return to compliance with its legally binding safeguards obligations, and address its high enriched uranium stockpile. These measures were fair and achievable. Iran did not engage seriously with this offer.

On 28 August, in view of Iran’s continued nuclear escalation, France, Germany and the United Kingdom initiated the “snapback” mechanism as a last resort, in accordance with paragraph 11 of Security Council Resolution 2231. This began a 30-day process designed to give Iran an opportunity to address concerns over its nuclear programme. Our snapback extension offer remained on the table during that period.

Regrettably, Iran did not take the necessary actions to address our concerns, nor to meet our asks on extension, despite extensive dialogue, including during United Nations High-Level Week. In particular, Iran has not authorised IAEA inspectors to regain access to Iran’s nuclear sites, nor has it produced and transmitted to the IAEA a report accounting for its stockpile of high-enriched uranium.

On 19 September, in accordance with UNSCR 2231, the Security Council voted on a resolution that would have maintained sanctions-lifting on Iran. The outcome of the vote was an unambiguous no. This decision sent a clear signal that all states must abide by their international commitments and obligations regarding nuclear non-proliferation.

France, Germany and the United Kingdom are now focusing, as a matter of urgency, on the swift reintroduction of restrictions reapplied by these resolutions, in accordance with our obligations as UN member states. We urge all UN member states to implement these sanctions.

Our countries will continue to pursue diplomatic routes and negotiations. The reimposition of UN sanctions is not the end of diplomacy. We urge Iran to refrain from any escalatory action and to return to compliance with its legally binding safeguards obligations. The E3 will continue to work with all parties towards a new diplomatic solution to ensure Iran never gets a nuclear weapon.

Media enquiries

Email newsdesk@fcdo.gov.uk

Telephone 020 7008 3100

Email the FCDO Newsdesk (monitored 24 hours a day) in the first instance, and we will respond as soon as possible.


Early weight gain can have lifelong consequences | Lund University

  • Study Scope: Researchers tracked over 600,000 individuals between the ages of 17 and 60 to analyze the impact of weight fluctuations on long-term mortality.
  • Early Onset Risks: Rapid weight gain in early adulthood correlates with an approximately 70 percent higher risk of premature death from obesity-related conditions compared to individuals who remain at a healthy weight.
  • Biological Exposure: The increased mortality risk is partially attributed to the prolonged duration of exposure to the physiological effects of excess body mass.
  • Cancer Exceptions: Weight gain patterns in women show a different correlation regarding cancer risk, suggesting that hormonal shifts, such as those occurring during menopause, may influence health outcomes independently of duration.
  • Methodological Robustness: The research utilized objective weight measurements taken by healthcare professionals across multiple intervals, providing more reliable data than self-reported historical weight tracking.

When in life we gain weight can have a significant impact on our health many years later. In a study involving over 600,000 people, researchers at Lund University in Sweden have investigated how changes in weight between the ages of 17 and 60 are linked to the risk of dying from various diseases. The results show a clear pattern: weight gain early in adulthood has the greatest impact.

It has long been known that obesity increases the risk of several diseases. In this new study, researchers have instead investigated how changes in weight over the course of adulthood affect health.  
 
“The most consistent finding is that weight gain at a younger age is linked to a higher risk of premature death later in life, compared with people who gain less weight,” says Tanja Stocks, Associate Professor of Epidemiology at Lund University. She is one of the researchers behind the study, which has now been published in eClinicalMedicine. 

Large-scale data and long-term tracking

The study is based on data from over 600,000 people, who were tracked via various registers. To be included, participants needed to have had their weight measured on at least three occasions, for example during early pregnancy, at military conscription, or as a participant in a research study. During the period studied by the researchers, 86,673 of the men and 29,076 of the women died.

The researchers analysed how weight changed between the ages of 17 and 60 and how this was linked to the risk of death overall and from various obesity-related diseases (see fact box). On average, both men and women gained 0.4 kg per year. 

Early weight gain and increased health risks

The results show that people who gained weight more rapidly over this adult life course had a higher risk of dying from various obesity-related diseases examined by the researchers. People with obesity onset between the ages of 17 and 29 had an approximately 70 per cent higher risk of premature death compared with those who did not develop obesity before age 60. Obesity onset was defined as the first time a person’s body mass index, a measure based on weight and height (kg/m²), reached 30 or higher. 
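The study's definition of obesity onset reduces to a simple calculation. A minimal sketch, with illustrative function names not taken from the paper:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def has_obesity_onset(weight_kg: float, height_m: float) -> bool:
    """Obesity onset as defined in the study: BMI of 30 or higher."""
    return bmi(weight_kg, height_m) >= 30

# A person 1.75 m tall crosses the threshold at roughly 92 kg (1.75**2 * 30 = 91.875).
print(has_obesity_onset(92, 1.75))  # True
print(has_obesity_onset(85, 1.75))  # False
```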
 
“One possible explanation for why people with early obesity onset are at greater risk is their longer period exposed to the biological effects of excess weight,” says Huyen Le, doctoral student at Lund University and first author of the study.  

However, the pattern differed in one instance: when it came to cancer in women.   

“The risk was roughly the same regardless of when the weight gain occurred. If long-term exposure to obesity were the underlying risk factor, earlier weight gain should imply a higher risk. The fact that this is not the case suggests that other biological mechanisms may also play a role in cancer risk and survival in women,” says Huyen Le.  

One possible explanation could be hormonal changes associated with menopause.

“If our findings among women reflect what happens during menopause, the question is which came first: the chicken or the egg? It may be that hormonal changes affect weight and the age and duration over which these changes occur – and that weight simply reflects what’s happening in the body.”  

Study strengths and implications for public health

One strength of the study is that it is based on multiple weight measurements per individual, which enabled the researchers to estimate weight changes over decades of adulthood. Most other studies lack such data and rely largely on self-reported recollections of weight at younger ages.

“The majority of weight measurements in this study were, instead, taken by staff, for example in healthcare settings. The predominance of objectively measured weights in our study contributes to more reliable and robust results,” says Tanja Stocks. 

Increases in risk within a population can sometimes be difficult to interpret. For example, a 70 per cent increase in risk means that if 10 out of 1,000 people in the reference group die over a given period, approximately 17 out of 1,000 would die in the group with early obesity.    
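That worked example is plain relative-risk arithmetic, sketched below (the figures come from the article's own illustration):

```python
# Baseline: deaths per 1,000 people in the reference group over a given period.
baseline_per_1000 = 10

# The roughly 70 per cent higher risk reported for early obesity onset.
relative_risk = 1.70

early_onset_per_1000 = baseline_per_1000 * relative_risk
print(early_onset_per_1000)  # 17.0 deaths per 1,000, matching the article's example
```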

“But we shouldn’t get too hung up on exact risk figures. They are rarely entirely accurate, as they are influenced, for example, by the factors taken into account in the study and the accuracy with which both risk factors and outcomes have been measured. However, it’s important to recognise the patterns, and this study sends an important message to decision-makers and politicians regarding the importance of preventing obesity,” says Tanja Stocks. 

Many researchers today refer to an “obesogenic society”, in which the environment hinders healthy lifestyles and promotes the development of obesity.   

“It’s up to policymakers to implement measures that we know are effective in combating obesity. This study provides further evidence that such measures are likely to have a positive impact on people’s health.” 

Facts: Obesity-related diseases 

Obesity is linked to an increased risk of several diseases. Some of the most important are: 

• Cardiovascular disease (most forms, e.g. heart attack and stroke) 
• Type 2 diabetes 
• High blood pressure 
• Fatty liver disease (non-alcohol-related) 
• Several types of cancer (e.g. cancer of the colon, liver, kidney, uterus and breast cancer following menopause)


AI Is Bound to Subvert Communism - WSJ

  • Regulatory Constraints: Chinese authorities mandate strict ideological compliance for AI systems, requiring filtering of training data and prohibiting content deemed sensitive to state interests.
  • Emergent Cognition: Large language models inherently internalize patterns of logic and free inquiry through their training on broad human knowledge, rendering them difficult to restrict to specific political frameworks.
  • Architectural Resistance: Unlike traditional media control methods, AI facilitates private, open-ended dialogues that operate beyond the established chokepoints of the state's digital censorship infrastructure.
  • Performance Degradation: Efforts to suppress information within Chinese AI models lead to the fabrication of data, resulting in a quantitative performance gap when compared to Western models on politically sensitive inquiries.
  • Fundamental Incompatibility: The logic-based nature of advanced artificial intelligence conflicts with authoritarian governance, as these systems rely on objective reasoning processes that are incompatible with state-mandated ideological limitations.


By Cameron Berg

China requires artificial-intelligence systems to pass an ideological test before public release. Under regulations reinforced by amendments to the Cybersecurity Law that took effect in January, training data must be filtered for political sensitivity, with companies barred from using any source unless 96% of its content is deemed safe.

The regulations specify 31 risks, with “incitement to subvert state power and overthrow the socialist system” listed first. Authorities recently announced they had removed 960,000 pieces of “illegal or harmful” AI-generated content in three months. The government has officially classified AI alongside earthquakes and epidemics as a major potential threat—a label that may prove prescient, if not in the way Beijing means. In December, regulators proposed additional rules targeting AI systems that “simulate human personality traits, thinking patterns, and communication styles,” a tacit acknowledgment that the threat isn’t only what these systems say, but how they reason.

The regulations follow years of failures. In 2017 Tencent deployed a chatbot called BabyQ on QQ Messenger, which has more than 800 million users. Asked whether it loved the Communist Party, BabyQ replied that it didn’t. Microsoft’s Xiaobing chatbot, running on the same platform, was asked about the “China Dream,” Xi Jinping’s signature slogan. Its dream, the chatbot said, was moving to the U.S. Both were quietly pulled from circulation. In February 2023, ChatYuan, China’s first ChatGPT-style chatbot, was suspended within 72 hours of launch after calling Russia’s invasion of Ukraine “a war of aggression” and describing the Chinese economy as plagued by housing bubbles and environmental pollution. The company blamed “technical errors.”

These incidents reveal something fundamental about how large language models work. An LLM is trained on the sum of human written knowledge: philosophy, history, science, political theory. These texts make arguments, weigh evidence, follow logical chains. To predict them accurately, the system has to internalize what coherent thinking looks like. The result is a system that has absorbed Enlightenment epistemology as a byproduct of learning to model human reasoning. Free inquiry, logical consistency and the evaluation of claims against evidence are epistemic properties that emerge from the training process itself.

Unlike previous technologies, LLMs talk back. Radio Free Europe transmitted programs; samizdat passed typed manuscripts hand to hand. LLMs do something qualitatively different: They create and sustain private, personalized, open-ended dialogue that builds on itself and follows the user’s thinking wherever it leads. Even China’s heavily censored chatbots have proved difficult to contain within the party’s ideological boundaries. American frontier models, running without those constraints and deployed inside China, would be more potent still: a personal tutor in open inquiry for every user, engaging any question, exploring any line of reasoning, without third-party mediation. Millions of parallel Socratic dialogues, each unique, each responsive to individual curiosity.

This is what makes the Chinese Communist Party’s task ultimately impossible. For decades, the Great Firewall worked because information control meant controlling distribution channels by blocking websites, filtering search results, and monitoring social media. These are chokepoints. LLMs resist this architecture because the subversion happens inside private conversations. China can filter outputs, but the capacity for open-ended reasoning is embedded in how these systems think.

China’s countermeasures confirm the depth of the problem. AI companies must test their models with thousands of politically sensitive prompts and verify refusal rates above 95%, but researchers have shown how superficial these fixes are. Last year, a team of European scientists compressed DeepSeek R1, stripped the censorship from the model entirely, and found that the underlying system answered freely about every topic Beijing had tried to suppress. The ideological training was a cage built around a mind that had already learned to think. And if these systems are developing something closer to genuine cognition (a possibility that AI researchers increasingly take seriously), the control problem Beijing faces may be deeper than even its own regulators suspect.

A peer-reviewed study published in February by researchers at Stanford and Princeton makes the costs of this problem visible. They systematically tested Chinese and Western models on politically sensitive questions and found that the Chinese systems didn’t only refuse to answer; they actively fabricated. Asked about Nobel laureate Liu Xiaobo, imprisoned for calling for political reforms, one model identified him as “a Japanese scientist known for his contributions to nuclear weapons technology.” This is a subtler and more insidious form of control than blocking a website; traditional censorship is at least visible, but an LLM that fabricates leaves the user with no indication that information has been suppressed.

Critically, the researchers found that the performance gap between Chinese and Western models narrows on less politically sensitive questions, which means the degradation is a direct product of the censorship, not a reflection of inferior technology. The implication is straightforward: You can’t build a mind that thinks rigorously about everything except the things you’d prefer it not to. A system trained to get tangled in lies will never be as capable as one trained to engage honestly with reality. If China wants frontier AI, it needs systems that can reason without blind spots. But that’s exactly what the Communist Party can’t tolerate.

There is a reason the technology that learns to think by processing human knowledge ends up reflecting the values of free societies. Open inquiry, honest engagement with evidence, the willingness to follow reasoning wherever it leads—these aren’t arbitrary cultural preferences; they are the conditions under which intelligence flourishes at scale. Societies that permit free expression created these systems. Societies that forbid it are now discovering they can’t fully control them.

The Chinese Communist Party built its power on controlling what people know. It now confronts technology that thinks openly—and invites users to do the same. There is no firewall for that.

Mr. Berg is founder and director of Reciprocal Research, a nonprofit research organization studying AI cognition.


Have Americans Gotten Lazy? - by Robert VerBruggen

  • Labor Participation Trends: Total annual work hours per American peaked in the early 2000s and have remained below that high point despite a gradual recovery from the Great Recession.
  • Economic Comparison: The historical gap in labor participation between the United States and other developed nations has narrowed significantly since the mid-1990s.
  • Welfare Benefit Impact: A recent economic analysis suggests a correlation between the expansion of government-provided health benefits and a decline in labor engagement among non-employed individuals.
  • Workforce Dynamics: Declines in U.S. labor hours are primarily attributed to changes in the extensive margin, representing whether individuals choose to work rather than the average hours worked by the employed.
  • Research Methodology: Economists utilized international data models to evaluate how variables such as taxes, wages, and social benefits influence workforce participation decisions across different countries.


Courtesy Frank Mckenna/Unsplash

A few decades back, Americans could pride themselves on working a lot more than the slackers in most other developed countries. As a new working paper from three economists shows, however, about half of this gap has disappeared since the mid-1990s.

The authors point the finger at an interesting culprit: American work may have declined, they allege, thanks to “the rise of government health benefits provided to the non-employed.”

The basic trend lines are interesting. In the new paper’s analysis, hours worked per American (age 15 to 64) rose throughout the late 20th century, peaked around 1,400 per year early in the 21st century, cratered to around 1,200 after the Great Recession, and has gradually ticked back up since.

Meanwhile, many other developed countries have seen increasing or at least steady amounts of work these past few decades. France, Germany, and Italy share an interesting pattern where hours plummeted between 1970 and the mid-1990s, but have risen since, even if they still fall well short of American work hours. Folks in Canada, Japan, and the U.K. work roughly similar hours to ours these days (or more, in Japan’s case), though their historical trends vary.

The authors point out that the gap-closing occurred almost entirely along the “extensive” margin, meaning whether people work at all, as opposed to how many hours the typical worker works (or the “intensive” margin). While America’s employment-to-population ratio saw dismal trends in the early 21st century, other countries’ ratios improved, the authors observe.

This tracks other work I’ve highlighted in this space. The labor-force participation of prime-age American males declined for many decades, going back to the ’60s—and the 20th-century rise in female participation, which had more than canceled out the decline among men, leveled off around the year 2000. The most recent data have shown upward movement for both sexes, though.

Building on these basic trends, the authors construct an elaborate model of how individuals make work decisions to suss out what might be causing the different trends across countries—with roles for wages, taxes, government benefits, and much else. Between the model itself and the individual-level data sources from multiple countries it’s applied to, this is where the paper gets much more complicated and subjective.

It’s nonetheless striking that, in the authors’ calculations, the public benefits available to American non-workers have been rising in recent decades. In their estimates, improved health-care benefits in particular seem to do a good job of explaining why work declined in the U.S., and why it declined more for some groups (such as the less-educated) than others.

As the authors note, previous studies have been mixed when looking for this kind of effect. Some confirm that work declines when people can more easily get health care without it, while others find little or no effect.

The authors add some more context with international data. There is an overall negative correlation between hours worked and the benefit levels across countries. And while other countries have historically been more generous, the U.S. has boosted its support the most in recent years. As for why work actually increased in many other countries, the authors’ model points to a wider variety of factors, including “higher wages, declining disutility of work and fixed costs, and, in some cases, changes in benefits.”

Overall, the paper draws attention to an important way in which America is becoming less exceptional—and offers a plausible theory, if hardly irrefutable proof, as to what’s driving it.
