
Russia-linked cryptocurrency services and sanctions evasion

  • Sanctions evasion: Cryptoasset exchanges provide transaction routes for Russian entities to conduct cross-border payments outside traditional banking oversight.
  • Currency conversion: Platforms facilitate the conversion of rubles into cryptoassets for transfer to overseas brokers, avoiding regulated intermediaries.
  • Bitpapa operations: This peer-to-peer exchange uses wallet rotation strategies to hide the Russian origin of funds and minimize exposure metrics.
  • ABCeX infrastructure: Operating from Moscow, this service has processed $11 billion while using obfuscation methods to mask counterparty connections.
  • Exmo connectivity: Although claiming geographic separation, analysis shows Exmo.com and Exmo.me share identical custodial wallet infrastructure and pooled funds.
  • Rapira trading: This Georgia-incorporated exchange facilitates ruble-based trading and significant transaction volumes with sanctioned entities through its Moscow office.
  • Aifory Pro services: This specialized cash-to-crypto provider offers virtual payment cards to bypass international service restrictions for Russian users.
  • High-risk links: Multiple exchanges maintain direct financial connections to sanctioned platforms like Garantex and to international entities such as Iranian exchanges.

Cryptoasset exchanges that maintain operational or financial connections with Russia continue to enable the circumvention of international sanctions. These platforms provide transaction routes that allow Russian entities to make cross-border payments shielded from traditional banking oversight.

Fiat currencies such as rubles can be converted into cryptoassets through these services and transferred across borders without passing through regulated intermediaries. The funds can then be converted to local currency through overseas cryptoasset brokers or exchanges.

Despite growing regulatory pressure, many of these exchanges, some with nominal registrations outside Russia, still facilitate high volumes of cryptoasset trading linked to sanctioned entities. This article outlines several such services, many of which are themselves yet to be sanctioned.

Bitpapa

Bitpapa is a peer-to-peer (P2P) cryptoasset exchange with corporate registrations in the UAE. However, it primarily targets users in Russia, allowing rubles to be exchanged for a range of cryptoassets.

It was sanctioned by the U.S. Office of Foreign Assets Control (OFAC) in March 2024 for supporting Russian sanctions evasion. Evidence of its role in evasion includes its high exposure to other sanctioned entities: Approximately 9.7% of its outgoing crypto funds are destined for OFAC-sanctioned targets, including 5% specifically to the sanctioned exchange Garantex.

Furthermore, blockchain analysis suggests Bitpapa manages its wallets specifically to evade sanctions enforcement by constantly rotating addresses. This technique is designed to prevent transaction monitoring systems from identifying Bitpapa as a counterparty and to hide the Russian origin of the funds from services that subsequently receive them.
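Wallet rotation defeats any screening that relies on a static list of known service addresses. A minimal illustrative sketch in Python (the addresses and derivation scheme are hypothetical, not Bitpapa's actual setup):

```python
import hashlib

def fresh_address(seed: str, n: int) -> str:
    """Stand-in for HD-wallet derivation: a new pseudo-address per payout."""
    return hashlib.sha256(f"{seed}:{n}".encode()).hexdigest()[:16]

# A naive monitor only knows addresses attributed in past investigations.
known_service_addresses = {fresh_address("service-hot", 0)}

# The service pays each withdrawal from a freshly rotated address.
withdrawals = [fresh_address("service-hot", n) for n in range(1, 6)]

flagged = [a for a in withdrawals if a in known_service_addresses]
print(f"flagged {len(flagged)} of {len(withdrawals)}")  # flagged 0 of 5
```

Because every payout address is new, the static list flags nothing; attribution instead requires clustering heuristics such as common funding sources or co-spending, which is how blockchain analysts link rotated wallets back to a service.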

Bitpapa wallet rotation

Selected illicit and high-risk Russia-centric counterparties of Bitpapa.

ABCeX

ABCeX facilitates both order-book and P2P ruble-to-cryptoasset trading. It operates an office in Moscow’s Federation Tower, a location previously occupied by the sanctioned exchange Garantex. ABCeX utilises a wallet obfuscation strategy to prevent cryptoasset transactions from being linked to the service.

ABCeX has processed at least $11 billion in cryptoassets, with significant amounts being sent to the sanctioned Garantex and Aifory Pro (see below).

ABCeX transfers

Direct transaction flows between ABCeX and sanctioned or high-risk Russia-centric entities.

Exmo (Exmo.com and Exmo.me)

Exmo provides a case where purported geographic operational separation is contradicted by on-chain data. Following the 2022 invasion of Ukraine, Exmo claimed to have exited the Russian market by selling its regional business to Exmo.me.

However, blockchain analysis demonstrates that Exmo.com and Exmo.me continue to share the same custodial wallet infrastructure. Cryptoassets deposited into either platform are pooled into identical hot wallet addresses, and withdrawals for both platforms are issued from the same addresses.

This indicates there is no real separation at an operational level, allowing funds from the Russian-facing platform to be co-mingled with the Western-facing entity. Exmo has conducted more than $19.5 million in direct transactions with sanctioned entities, including Garantex, Grinex and Chatex.
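The shared-custody claim reduces to a simple set comparison over the hot wallets observed receiving swept deposits from each brand. A hypothetical sketch (the addresses below are placeholders, not Exmo's real wallets):

```python
# Hot-wallet addresses observed as deposit-sweep destinations for each
# brand (placeholder values for illustration).
exmo_com_hot = {"0xaaa1", "0xaaa2", "0xaaa3"}
exmo_me_hot = {"0xaaa2", "0xaaa3", "0xbbb9"}

shared = exmo_com_hot & exmo_me_hot
overlap = len(shared) / len(exmo_com_hot | exmo_me_hot)

print(f"shared hot wallets: {sorted(shared)} (Jaccard {overlap:.2f})")
# Any non-trivial overlap means deposits to either brand are pooled in
# the same custody, contradicting a claim of operational separation.
```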

Exmo wallets

Direct interactions between Exmo and sanctioned or high-risk entities with a Russian nexus.

Rapira

Rapira is a Georgia-incorporated exchange with a Moscow office that facilitates ruble-based trading. It has engaged in direct cryptoasset transactions to and from the sanctioned exchange Grinex totaling more than $72 million. Rapira’s Moscow offices were reportedly raided as part of an investigation into suspected capital flight to Dubai.

Rapira wallets

Direct interactions between Rapira and sanctioned or high-risk Russia-linked entities.

Aifory Pro

Aifory Pro specializes in cash-to-cryptoasset services in Moscow, Dubai and Türkiye. It also serves as a "Foreign Economic Activity Payment Agent" for international trade, for example between Russia and China. And it explicitly facilitates the bypass of service restrictions by offering virtual payment cards and Apple Pay-enabled cards that utilize a customer's USDT balance to pay for foreign services like Airbnb and ChatGPT, otherwise blocked in Russia.

Further evidence of high-risk activity includes its direct financial links to Abantether, an Iranian exchange, to which it has sent nearly $2 million in cryptoassets.

Aifory Pro wallets

Direct cryptoasset transaction flows between Aifory Pro and sanctioned or high-risk Russia-linked entities.


Google Restricts AI Ultra Subscribers Over OpenClaw OAuth, Days After Anthropic Ban

  • Account restrictions: Google suspended AI Ultra subscribers paying $249.99 monthly for using OpenClaw third-party OAuth access, without prior warning or specific explanations.
  • Service integration: Gemini-related restrictions threaten access to linked services including Gmail, Workspace, and cloud storage due to shared account infrastructure.
  • Anthropic policy: Anthropic formally banned third-party OAuth token usage for Claude accounts, citing token arbitrage economics and the need for first-party telemetry.
  • Security risks: OpenClaw faces over 21,000 exposed instances, along with supply chain attacks and infostealers capturing operational context and cryptographic keys.
  • OpenAI strategy: OpenAI hired the OpenClaw creator and endorsed third-party harness usage for Codex, positioning itself against competitors' restrictions.
  • Economic factors: Providers price subscriptions around human usage patterns; automated high-throughput workflows create unsustainable gaps relative to API pricing.
  • Developer response: Users are migrating to local open-source models like Kimi K2.5 or Qwen 3.5 and private VPS configurations to avoid provider dependency and sudden loss of access.
  • Lack of support: Affected Google customers reported receiving only automated replies or no human communication regarding their long-term accounts and paid subscriptions.

Google has restricted accounts of AI Ultra subscribers who accessed Gemini models through OpenClaw, a third-party OAuth client, according to a growing thread on the Google AI Developer Forum. The restrictions arrived without warning or explanation, cutting off users paying $249.99 per month from Gemini 2.5 Pro and, in some cases, threatening access to Gmail, Workspace, and other linked services. The enforcement comes two days after Anthropic updated its legal terms to explicitly ban OAuth token usage in third-party tools, including OpenClaw.

Two of the three largest AI model providers locked down third-party access in the same week. Same calculation.

Key Takeaways

  • Google restricted AI Ultra subscribers ($249.99/month) who used OpenClaw OAuth, with no warning or explanation
  • Anthropic banned third-party OAuth token usage two days earlier, citing token arbitrage economics
  • OpenClaw faces 21,639 exposed instances, infostealers targeting config files, and supply chain attacks
  • OpenAI hired OpenClaw's creator and endorsed third-party harness usage, widening the competitive gap

$249.99 a month buys you access, not flexibility

The forum thread that started this carries a title worth reading in full: "Account Restricted Without Warning, Google AI Ultra, OAuth via OpenClaw." The original poster described losing access to Gemini 2.5 Pro after connecting through OpenClaw, an open-source AI agent framework that routes existing subscriptions through alternative interfaces. No terms of service violation was cited. No explanation followed.
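Mechanically, "routing a subscription through an alternative interface" just means presenting the account's OAuth bearer token to the provider's API the way the first-party client does. A hedged sketch; the endpoint URL and payload shape here are hypothetical placeholders, not Google's real Gemini API:

```python
import json
import urllib.request

def build_request(token: str, prompt: str) -> urllib.request.Request:
    """Build an authenticated request shaped like one from the official client."""
    return urllib.request.Request(
        "https://api.example-provider.com/v1/generate",  # hypothetical endpoint
        data=json.dumps({"prompt": prompt}).encode(),
        headers={
            "Authorization": f"Bearer {token}",  # the subscription's OAuth token
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("ya29.example-token", "Summarize this diff.")
print(req.get_header("Authorization").split()[0])  # Bearer
```

Because the token is a valid first-party credential, enforcement cannot key on the request alone; it has to lean on traffic patterns, client fingerprints, or missing telemetry, which is the signal the providers appear to be acting on.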

Other users arrived with matching stories. Some submitted appeals and received only automated replies. Others couldn't find a human representative at all. One user wrote that they tried creating a fresh Google account, only to discover that one restricted too. "No warnings, no nothing, just a ban after being a customer for decades," the user wrote, threatening to cancel YouTube, Google Ultra, and every other Google product. Another called the situation what it felt like from the inside: "I'd have to sue a trillion-dollar company just to get the measly fee I paid."

Nearly three thousand dollars a year, and the best Google could offer was a community manager promising to "share this with our internal teams." No timeline. No specifics. The thread kept growing.

And here is where Google's architecture turns an annoyance into something worse. Google's AI products sit on top of the broader Google account infrastructure. A restriction triggered by Gemini usage can cascade into Gmail, Workspace, and cloud storage. All tied to the same credentials. Several forum users flagged this exact concern. For a developer whose entire business runs on Google services, getting your AI subscription flagged doesn't just cut off one product. It puts everything at risk.

Anthropic wrote the rules first

Google's crackdown didn't arrive in a vacuum. Anthropic, two days earlier on February 20, had revised its Consumer Terms of Service to spell out what had been loosely implied since February 2024: OAuth tokens from Claude Free, Pro, and Max accounts are only permitted in Claude Code and Claude.ai. Using them anywhere else, including OpenClaw, violates the terms.

What makes the timing notable is that Anthropic's contractual language in Section 3.7 had already forbidden unauthorized third-party access since at least February 2024. For two years, that clause sat in the terms while tools like OpenClaw and OpenCode quietly let users supply Claude subscription keys anyway. Anthropic had tolerated the practice, or at least failed to enforce against it, until the economics forced their hand.

"Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service, including the Agent SDK, is not permitted and constitutes a violation of the Consumer Terms of Service," the updated compliance page states.

Anthropic engineer Thariq Shihipar provided the business reasoning in a January social media thread. Third-party harnesses create "unusual traffic patterns without any of the usual telemetry that the Claude Code harness provides," he wrote. That makes debugging impossible when users hit rate limits or account bans. "They don't have any other avenue for this support."

The Register's Thomas Claburn framed the economics bluntly. Anthropic sells subscription tokens at a discount compared to API pricing. An all-you-can-eat buffet priced with certain usage expectations. Third-party harnesses broke those expectations by letting subscribers extract more value than the subscription model assumed. Token arbitrage. Pay the flat rate, route tokens through a tool that burns through them faster than anyone planned for.
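The arbitrage is plain arithmetic. With hypothetical numbers (a $200 flat-rate plan against a $15 per million tokens API rate; neither is a provider's published price), the break-even point sits far below what an agent harness consumes:

```python
subscription_per_month = 200.0   # hypothetical flat-rate plan, $/month
api_price_per_mtok = 15.0        # hypothetical API rate, $/million tokens

# Tokens per month at which the flat rate becomes cheaper than the API.
breakeven_mtok = subscription_per_month / api_price_per_mtok
print(f"break-even: {breakeven_mtok:.1f}M tokens/month")  # 13.3M

# A human in a chat UI might hover near break-even; an automated harness
# can burn hundreds of millions of tokens at the same flat price.
agent_mtok = 500
api_equivalent = agent_mtok * api_price_per_mtok
print(f"API-equivalent cost of agent workload: ${api_equivalent:,.0f}")  # $7,500
```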

On Thursday, OpenCode pushed a commit removing support for Claude Pro and Max account keys. The commit message cited "anthropic legal requests." Policy language turned into enforced code in less than a week.

OpenAI, emboldened, leaned into the gap. Thibault Sottiaux from OpenAI pointedly endorsed using Codex subscriptions with third-party harnesses. Competitive positioning dressed up as developer friendliness.

OpenClaw sits at the center of all of it

OpenClaw keeps generating headlines, though not the kind its community wants. The open-source agent framework, formerly known as Clawdbot and then Moltbot, has amassed over 200,000 GitHub stars since launching in November 2025. Sam Altman announced on February 15 that OpenClaw founder Peter Steinberger would be joining OpenAI. OpenClaw would transition into a foundation structure with OpenAI backing.

But popularity and security have run in opposite directions. Censys identified 21,639 exposed OpenClaw instances sitting on the public internet as of January 31. SecurityScorecard's STRIKE team found hundreds of thousands more carrying potential remote code execution risks. Five CVEs were patched between January 25 and January 30, including a one-click RCE vulnerability that needed two attempts to fix properly before version 2026.1.30 closed the remaining gaps.

Infostealers have already adapted. Hudson Rock disclosed in mid-February that a Vidar variant had exfiltrated an OpenClaw user's full configuration, including gateway tokens, cryptographic keys, and the agent's soul.md file containing its operational instructions. Alon Gal, Hudson Rock's CTO, told The Hacker News the malware wasn't even targeting OpenClaw specifically. A broad file-grabbing routine "inadvertently struck gold by capturing the entire operational context of the user's AI assistant."

Then came the supply chain attack. ClawHavoc, discovered by Koi Security in late January, used professional-looking skills uploaded to ClawHub, OpenClaw's plugin marketplace. The skills instructed users to install a "helper agent" that actually deployed the Atomic Stealer infostealer. Full remote control over the victim's OpenClaw instance and every service it touched.

Alex Polyakov, founder of AI red teaming firm Adversa AI, built SecureClaw in response. An open-source audit tool running 55 hardening checks mapped to OWASP and MITRE frameworks. Polyakov doesn't oversell it. "We don't claim to 'solve' prompt injection. But we do make it significantly harder through multi-layer defense."


The buffet is closing

Strip away the vendor names and the pattern is visible. Somewhere inside Google and Anthropic, finance teams stared at usage dashboards and saw the same problem. AI model providers had priced subscriptions assuming users would interact through first-party interfaces, at a human pace, with predictable usage curves. OpenClaw and tools like it shattered that assumption by letting subscribers route tokens through automated, high-throughput agent workflows. The math stopped working for providers who had been subsidizing access.

Google and Anthropic landed in the same place, but Anthropic at least told people what was happening. Updated the legal terms. Had an engineer post the reasoning on social media. Gave tool makers a few days to adjust. OpenCode removed Claude support by Thursday. Messy, but legible.

Google just started restricting accounts. No policy update appeared. No public statement went out. No explanation reached users paying $249.99 a month. Google's product engineering team confirmed to at least one forum user that their account "was suspended from using our Antigravity service," Google's developer-facing branding for its Gemini AI platform, but offered no details on what triggered the enforcement or how to avoid it in the future.

The developer forum thread, still growing, is the only public record of what happened. For a company that built OAuth and champions it across every other product line, from Gmail to YouTube to Google Drive, the silence on why OAuth access to Gemini triggers account restrictions is baffling.

What developers are actually facing

Short-term advice in developer communities has been blunt: stop using third-party OAuth clients with AI subscriptions immediately. Revert to native interfaces. Consider API key access, which carries per-token pricing but avoids the automated enforcement. Some developers have started migrating to local models entirely, running Kimi K2.5 or Qwen 3.5 on their own hardware to eliminate provider dependency altogether. One widely circulated Medium post detailed a migration from a $200-per-month Claude Max setup through OpenClaw to a $15-per-month dual-VPS configuration running open-source models.
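The migration math in that post is straightforward, using only the two figures it quotes ($200 per month for the Claude Max setup, $15 per month for the dual-VPS configuration):

```python
claude_max_monthly = 200   # $/month, subscription routed through OpenClaw
dual_vps_monthly = 15      # $/month, two VPSes running open-source models

annual_savings = (claude_max_monthly - dual_vps_monthly) * 12
print(f"annual savings: ${annual_savings:,}")  # annual savings: $2,220
```

The trade is operational rather than financial: the VPS route gives up frontier-model quality and provider support in exchange for predictable cost and immunity from sudden enforcement.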

The trust damage runs deeper than any single workaround can fix. Developers choosing an AI platform now have to weigh a new variable: the risk that a provider will retroactively restrict how you access what you've already paid for. Anthropic at least telegraphed the change. Google blindsided its own paying customers.

But the more consequential question sits behind all of these workarounds. If subscription pricing was always subsidized, and providers are now enforcing the implicit limits that made those subsidies viable, then the real cost of AI model access is higher than what anyone has been paying. The arbitrage window that OpenClaw and similar tools exploited didn't create artificial demand. It exposed the gap between what providers charge for subscriptions and what the underlying compute actually costs to run.

OpenAI is the outlier for now, welcoming third-party harness usage and hiring OpenClaw's creator. That looks generous. It might also be a bet that owning the developer relationship and the agent infrastructure is worth more than protecting subscription margins in the short term. Whether that holds depends on how fast OpenAI's own costs grow as agent workloads scale.

None of that helps Google's AI Ultra subscribers today. They paid for premium access to the most capable Gemini models. They connected through a standard authentication protocol that Google itself designed and promotes everywhere else. And they got locked out. The forum thread has dozens of replies now. Google has not issued a public statement, and no affected user has reported a restored account.

Frequently Asked Questions

What is OpenClaw and why does it matter here?

OpenClaw is an open-source AI agent framework with over 200,000 GitHub stars that lets users route existing AI subscriptions through alternative interfaces. Its OAuth-based access to Google and Anthropic models triggered account restrictions from both providers within the same week.

Why is Google restricting AI Ultra accounts?

Google has not issued a public explanation. Forum evidence suggests accounts are flagged when users access Gemini models through third-party OAuth clients like OpenClaw. No specific terms of service violation has been cited to affected users.

What did Anthropic change about its OAuth policy?

On February 20, Anthropic explicitly banned OAuth token usage from Claude Free, Pro, and Max accounts in any third-party tool. The underlying rule existed since February 2024 in Section 3.7 but was not enforced until token arbitrage made the economics unsustainable.

Can a Google AI restriction affect Gmail and other services?

Potentially yes. Google's AI products share the same account infrastructure as Gmail, Workspace, and cloud storage. Forum users reported concerns that a Gemini-triggered restriction could cascade into their broader Google account and business services.

What alternatives exist if third-party OAuth access is blocked?

Developers can switch to API key access with per-token pricing, use native provider interfaces, or migrate to local open-source models like Kimi K2.5 or Qwen 3.5. One documented setup replaced a $200/month subscription with a $15/month dual-VPS configuration.


The A.I. Evangelists on a Mission to Shake Up Japan - The New York Times

  • Political Ascent: The tech-centric party Team Mirai secured 11 seats in Japan’s lower house during the February 2026 national elections.
  • Technological Platform: Policy proposals focus on implementing government chatbots, self-driving transport, and leveraging artificial intelligence to modernize public services.
  • Economic Efficiency: Leaders suggest that administrative savings generated by AI integration can be redirected to reduce pension and healthcare costs for citizens.
  • Electoral Performance: The party received over three million votes, approximately 7 percent of the total, with significant support from urban voters in their 40s and 50s.
  • Pragmatic Positioning: Analysts characterize the party’s approach as problem-solving oriented rather than traditionally partisan, specifically noting their refusal to support populist tax cuts.
  • Expert Composition: The newly elected representatives are predominantly high-achieving professionals with backgrounds in engineering and degrees from elite global universities.
  • Institutional Obstacles: The party faces a bureaucratic environment still reliant on legacy technologies such as fax machines and paper, alongside rules prohibiting digital devices in certain sessions.
  • Cultural Context: Adoption is supported by a domestic cultural view of technology as a helpful tool for addressing Japan’s labor shortages and aging population.

A Team Mirai sign at the Tokyo office of Takahiro Anno, the party leader, after the elections this month. Credit: Kentaro Takahashi for The New York Times


By Javier C. Hernández and Kiuko Notoya

Reporting from Tokyo

Feb. 22, 2026

With a ponytail, an indigo suit and a black T-shirt covered in lines of computer code, Takahiro Anno stands out in the button-down halls of Japan’s government.

Mr. Anno, 35, a software engineer and lawmaker, leads Team Mirai, a political party founded by techies that showed surprising strength this month in national elections. The party came from nowhere to win an eye-popping 11 seats in the lower house of Japan’s Parliament by promoting government chatbots, self-driving buses and other Promethean new technologies.

“A.I. is like fire,” Mr. Anno, a member of Japan’s upper chamber, the House of Councilors, since last year, said in an interview last week at his office in Tokyo. “Everything will be changed.”

Artificial intelligence is rapidly reshaping politics around the world, with officials turning to chatbots to help draft policies, and A.I.-generated misinformation spreading on a wide scale. In Britain, Denmark and elsewhere, A.I.-focused candidates and parties have started to appear on the ballot.

But few groups have had the success of Team Mirai (Team Future). Its leaders aim to use technology to make government more responsive and efficient — and to address issues like corruption and Japan’s acute labor shortage. They have said savings achieved through A.I. could be used to lower contributions to pension and health care plans for working-class families.

“Make slow politics fast,” said a campaign brochure. “Technology makes your life easier.”

Mr. Anno in his office in Tokyo. “A.I. is like fire. Everything will be changed,” he said. Credit: Kentaro Takahashi for The New York Times

A whiteboard in the office covered with buzzwords. Credit: Kentaro Takahashi for The New York Times

Team Mirai is still a small presence in the 465-seat lower house, the House of Representatives. But the rise was sudden, especially for a new group with only about 2,600 registered members.

The party, which had aimed to win five seats, ended up securing more than twice that number through Japan’s system of proportional representation. It garnered more than three million votes, almost 7 percent of all votes cast, and performed particularly well among urban people in their 40s and 50s, according to exit polls. The victory was so surprising that conspiracy theories circulated online suggesting that the engineers were part of a Chinese influence operation.

A critical reason for Team Mirai’s success, analysts say, is its contrarian stance on some hot-button issues. It argued that the consumption tax on food should be maintained, not suspended, going against a populist idea that was embraced by other parties.

“Voters may well have been attracted to its problem-solving, ‘neither left nor right’ approach,” said Tobias Harris, the founder of the advisory firm Japan Foresight.

Now Team Mirai’s newly elected representatives, with an average age of 40 and degrees from top universities, face an arduous task. They must work with established groups like the Liberal Democratic Party, led by Japan’s prime minister, Sanae Takaichi, to pursue their policies.

The engineers have ambitions to build state-of-the-art databases that shine light on political donations and explain arcane bills. But they must deal with Japan’s bureaucracy, famous for its fealty to fax machines, floppy disks and paper. They are already running up against rules that ban laptops and tablets in some parliamentary rooms.

“There’s so much paperwork,” said Aoi Furukawa, 34, a newly elected lawmaker, as he worked inside Mr. Anno’s office on a recent day. A whiteboard was covered with buzzwords like “internet of things” and “plurality is here.”

Mr. Furukawa, who previously worked as an engineer in Silicon Valley, said he saw parallels between coding and writing legislation.

“A lot of people believe in our view of the future,” he said.

The party has tapped into sentiment among some voters that Japan needs to move more swiftly to develop and deploy artificial intelligence. While robots have long been part of the culture, Japan lags countries like the United States and China in adopting A.I.

Aoi Furukawa, left, and other staff members inside Mr. Anno’s office. “There’s so much paperwork,” Mr. Furukawa said. Credit: Kentaro Takahashi for The New York Times

Team Mirai campaign posters. The party won 11 seats in Parliament by promising a future of robots, self-driving buses, A.I.-powered weapons and government chatbots. Credit: Kentaro Takahashi for The New York Times

Team Mirai gained momentum in July, when Mr. Anno won the party’s first seat in Parliament.

In the Feb. 8 lower house elections, Team Mirai fielded 14 candidates — 11 men, three women — with degrees from institutions like the University of Tokyo, the University of California, Berkeley and London Business School, and experience at companies like IBM and Sony.

The candidates made the pitch that technology could help address everyday concerns, like the rising cost of living, and they spoke about the need for Japan to invest in scientific research. The party deployed a chatbot to explain and get feedback on its proposals, including cutting taxes for families with children and deploying driverless buses. (To date, the bot has fielded nearly 39,000 questions and received almost 6,200 suggestions.)

As Team Mirai’s freshman class of lawmakers settles into Parliament this week, Mr. Anno said he was hopeful they could make an impact.

Mr. Anno, who on the side has published a sci-fi thriller, “Circuit Switcher,” about a businessman being held captive in a self-driving car, said that some people in the West might see A.I. as a job killer or the cyborg assassin in “The Terminator.” But in Japan, he said, people are more likely to associate the technology with Doraemon, the cuddly robotic cat from a popular manga series.

“Japanese people do not fear A.I.,” he said. “We’re used to doing things with A.I.”

A Team Mirai banner with phrases like “Team Mirai” and “Creating the future with you.” Credit: Kentaro Takahashi for The New York Times

Javier C. Hernández is the Tokyo bureau chief for The Times, leading coverage of Japan and the region. He has reported from Asia for much of the past decade, previously serving as China correspondent in Beijing.

Kiuko Notoya is a Tokyo-based reporter and researcher for The Times, covering news and features from Japan.

A version of this article appears in print on Feb. 23, 2026, Section A, Page 4 of the New York edition with the headline: Technology Evangelists Hope to Shake Up Japan With New Political Clout.


Government Docs Reveal New Details About Tesla and Waymo Robotaxis’ Human Babysitters | WIRED

  • Remote Assistance Roles: Human workers provide data and advice to autonomous vehicle software when the systems encounter perplexing or complex driving situations.
  • Safety Critical Impact: Remote operators are essential for navigating hazards like power outages and ensuring compliance with traffic laws to prevent accidents.
  • Control Mechanisms Clarified: Waymo specifies that its agents provide situational advice rather than directly steering or remote-controlling the vehicles.
  • Workforce Scalability Data: Approximately 70 agents monitor a fleet of 3,000 robotaxis, suggesting the automated system manages the majority of driving tasks independently.
  • Global Labor Distribution: Fifty percent of Waymo’s remote assistance staff are based in the Philippines, while complex incidents are managed by US-based teams.
  • Staffing Compliance Standards: Personnel undergo drug testing, background checks, and training on specific road rules to maintain operational safety standards.
  • Tesla Operational Differences: Tesla utilizes domestic remote operators based in Texas and California, moving away from in-vehicle safety monitors toward chase cars and remote centers.
  • Transparency And Oversight: Industry experts emphasize that understanding human intervention frequency and methods is vital for evaluating the overall safety of autonomous transport.

Are robotaxis really just big, remote-controlled cars, with nameless and faceless people in far-off call centers piloting the things from behind consoles? As the vehicles and their science-fiction-like software expand to more cities, the conspiracy theory has rocketed around group chats and TikToks. It’s been powered, in part, by the reluctance of self-driving car companies to talk in specifics about the humans who help make their robots go.

But this month, in government documents submitted by Alphabet subsidiary Waymo and electric-auto maker Tesla, the companies have revealed more details about the people and programs that help the vehicles when their software gets confused.

The details of these companies' “remote assistance” programs are important because the humans supporting the robots are critical in ensuring the cars are driving safely on public roads, industry experts say. Even robotaxis that run smoothly most of the time get into situations that their self-driving systems find perplexing. See, for example, a December power outage in San Francisco that killed stop lights around the city, stranding confused Waymos in several intersections. Or the ongoing government probes into several instances of these cars illegally blowing past stopped school buses unloading students in Austin, Texas. (The latter led Waymo to issue a software recall.) When this happens, humans get the cars out of the jam by directing or “advising” them from afar.


These jobs are important because if people do them wrong, they can be the difference between, say, a car stopping for or running a red light. “For the foreseeable future, there will be people who play a role in the vehicles’ behavior, and therefore have a safety role to play,” says Philip Koopman, an autonomous-vehicle software and safety researcher at Carnegie Mellon University. One of the hardest safety problems associated with self-driving, he says, is building software that knows when to ask for human help.

In other words: If you care about robot safety, pay attention to the people.

The People of Waymo

Waymo operates a paid robotaxi service in six metros—Atlanta, Austin, Los Angeles, Phoenix, and the San Francisco Bay Area—and has plans to launch in at least 10 more, including London, this year. Now, in a blog post and letter submitted to US senator Ed Markey this week, the company made public more aspects of what it calls its “remote assistance” (RA) program, which uses remote workers to respond to requests from Waymo’s vehicle software when it determines it needs help. These humans give data or advice to the systems, writes Ryan McNamara, Waymo's vice president and global head of operations. The system can use or reject the information that humans provide.

“Waymo’s RA agents provide advice and support to the Waymo Driver but do not directly control, steer, or drive the vehicle,” McNamara writes—denying, implicitly, the charge that Waymos are simply remote-controlled cars. About 70 assistants are on duty at any given time to monitor some 3,000 robotaxis, the company says. The low ratio indicates the cars are doing much of the heavy lifting.
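The staffing ratio Waymo reports is easy to make concrete. A quick back-of-envelope calculation, using only the figures cited above:

```python
# Figures from Waymo's letter: roughly 70 remote assistants on duty
# monitoring a fleet of about 3,000 robotaxis.
agents_on_duty = 70
fleet_size = 3000

vehicles_per_agent = fleet_size / agents_on_duty
print(f"{vehicles_per_agent:.0f} vehicles per agent")  # ≈ 43 vehicles per agent
```

At roughly 43 vehicles per agent, no individual could be "driving" the cars in any meaningful sense; the ratio only works if the software handles the overwhelming majority of decisions on its own.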

Waymo also confirmed in its letter what an executive told Congress in a hearing earlier this month: Half of these remote assistance workers are contractors overseas, in the Philippines. (The company says it has two other remote assistance offices in Arizona and Michigan.) These workers are licensed to drive in the Philippines, McNamara writes, but are trained on US road rules. All remote assistance workers are drug- and alcohol-tested when they are hired, the company says, and 45 percent are drug-tested every three months as part of Waymo’s random testing program.


The company says a highly trained US-based team handles the most complex remote interactions, including collisions, contacts with law enforcement and riders, and interactions with regulatory agencies. The company declined to comment beyond the details in its letter.

Tesla’s Human Babysitters

Tesla has operated a small robotaxi service in Austin, Texas, since last June. The service started with human safety monitors sitting in the vehicles’ front passenger seats, ready to intervene if the tech went wrong. Last month, CEO Elon Musk said the company had started to take these monitors out of the front seats. He acknowledged that while the company did use “chase cars” to monitor and intervene with the software, it had started to operate some cars without that more direct human intervention. (A larger but still limited Tesla ride-hailing service in the Bay Area operates with human drivers behind the wheel.) But the company has not revealed much about the people who help its vehicles out of jams, or how they do the job.

Now, in a filing submitted to the California Public Utilities Commission this week, Tesla AI technical program manager Dzuy Cao writes that Tesla runs two offices of “remote operators,” based in Austin and the Bay Area. (In a seeming dig at Waymo’s Philippines revelations, Cao emphasizes that it “requires that its remote operators be located domestically.”) The company says these operators undergo “extensive” background checks and drug and alcohol testing, and have valid US driver’s licenses.

But Tesla still hasn’t revealed how often these operators intervene with its self-driving technology, and exactly how they do it. Tesla didn’t respond to WIRED’s request for comment.

The details of these remote programs could determine whether self-driving cars actually keep others on the road out of harm’s way. “If there’s a person who can make a mistake that can result in or contribute to a crash, then you have a safety issue you have to deal with,” Koopman says.


Pentagon software runs on code older than its recruits. Code Metal's $125 million fix.

  • Company Valuation: Code Metal reached a 1.25 billion dollar valuation following a 125 million dollar Series B funding round in February 2026.
  • Core Technology: The platform utilizes artificial intelligence to translate legacy defense software into modern programming languages like Rust, VHDL, and CUDA.
  • Formal Verification: The system employs mathematical formal verification to prove that translated code behaves identically to the original, aiming for zero-error compliance with flight safety standards.
  • Market Demand: The Department of Defense faces a 66 billion dollar annual IT challenge, with 60 to 70 percent of software budgets dedicated to maintaining aging legacy systems.
  • Labor Shortage: Military infrastructure relies on outdated languages such as Ada, Fortran, and COBOL, while the original programmers are reaching retirement age without available replacements.
  • Strategic Leadership: The firm is led by veterans of BAE Systems and MIT Lincoln Laboratory and has recruited executive talent from Salesforce and the National Security Council.
  • Client Portfolio: Major defense contractors and entities, including L3Harris, RTX, and the U.S. Air Force, have engaged the company to modernize mission-critical systems.
  • Scalability Risks: While effective in pilot programs, the mathematical verification method faces unproven technical hurdles when applied to undocumented, million-line codebases.

A decade ago, Peter Morales sat inside a BAE Systems office trying to solve a problem that would later make him a billionaire on paper. Engineers on the F-35 program had written machine-learning algorithms in MATLAB, MathWorks's programming environment, to recognize and jam enemy radar. The algorithms worked fine in the lab. Getting them to run on the fighter's actual onboard chips was another matter entirely.

"They developed some cool machine-learning algorithms and they needed help getting this running in real time," Morales told the Boston Globe in 2024. The gap between writing code that works in a lab and deploying it on hardware that flies at Mach 1.6 consumed months of painstaking manual translation. Every line rewritten by hand introduced risk, and every risk required testing. Months of it. Time that defense procurement officials did not have.

That frustration became Code Metal, a Boston startup now valued at $1.25 billion after closing a $125 million Series B in February 2026. The company's pitch is deceptively simple: use AI to translate code between programming languages, then mathematically prove the translation is correct. A wrong line of code in a commercial app means a crash and a patch. In defense software, it can mean a grounded fleet or a breached satellite link.

The central question for Code Metal is not whether verified code translation has value. The Pentagon has already answered that. What nobody knows yet is whether Code Metal's method holds up when you throw it at the sprawling, decades-old codebases that actually run American military infrastructure. Pilot programs are one thing. Production is something else.

The Breakdown

  • Code Metal raised $125M Series B at $1.25B valuation, claims profitability with L3Harris, RTX, and Air Force as customers
  • AI translates legacy defense code (Ada, COBOL, Fortran) into modern languages, then uses formal verification to prove correctness
  • Pentagon spends $66B yearly on IT, with 60-70% going to maintain legacy systems written in languages losing their last programmers
  • Key risk: formal verification has never been proven at scale on million-line, decades-old, undocumented codebases

A trillion dollars of technical debt

How bad is it? The F-35 alone runs on somewhere between 8 and 24 million lines of code, depending on who's counting and whether you include the ground-based logistics suite. C, C++, and Ada, that last one a language the DoD mandated in the 1980s because it seemed like a good idea at the time. Congressional testimony put the F-35's software bill at roughly $16.4 billion between fiscal years 2018 and 2024.

That is one weapons system. The Pentagon's total IT budget request for fiscal 2026 hit $66 billion, up $1.8 billion from the year before. Its cyber budget alone hit $15 billion. Some estimates put military software maintenance at 60 to 70 percent of total software budgets. Sixty to seventy cents of every dollar, spent keeping old systems breathing rather than building new capability. It is an embarrassing ratio for an organization that bills itself as the most technologically advanced fighting force on earth.
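The maintenance ratio translates into dollar terms directly. A rough calculation from the figures in this article (the 60 to 70 percent share is an estimate, as noted above):

```python
# DoD fiscal 2026 IT budget request, in billions of dollars (cited above).
it_budget = 66.0

# Estimated share of software spend that goes to maintaining legacy systems.
low_share, high_share = 0.60, 0.70

low = it_budget * low_share    # 39.6
high = it_budget * high_share  # 46.2
print(f"Legacy maintenance: ${low:.1f}B to ${high:.1f}B per year")
```

Call it roughly $40 billion to $46 billion a year keeping old code alive, which is the market Code Metal is chasing.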

The human problem is worse than the financial one, and the Pentagon knows it. Ada, Fortran, COBOL. The people who wrote those languages into Pentagon systems three and four decades ago are heading for retirement. Their knowledge walks out with them, and nobody is lining up to replace them. Try posting a job listing for a COBOL specialist willing to maintain satellite communications protocols. See who applies.

Washington gets this, at least on paper. The DoD's Software Modernization Implementation Plan for 2025-2026 calls for software factories, cloud migration, legacy overhaul. The Army wants a whole new budget line, BA-8, built around software rather than the hardware-first model the Pentagon has run on for decades. Plans exist. A fast, safe way to actually convert millions of lines of legacy code does not.

How the machine works

Code Metal builds a translation engine. You feed it code in Python, Julia, MATLAB, or C++. Out the other end comes Rust, VHDL, Nvidia's CUDA, whatever the target hardware demands. The translation runs in phases. The system first analyzes the source codebase to identify what each component does. It generates a translation plan. Then a combination of large language models and traditional code-processing methods rewrites the software in the target language.

That description fits dozens of AI coding tools on the market. What Code Metal sells as its difference is the verification layer.

At each translation step, the platform generates test harnesses, automated containers of data and evaluation tools, that check whether the new code behaves identically to the original. The company uses formal verification, a mathematical method that maps every possible state of a program to prove correctness rather than merely testing for common failure modes. Code Metal claims compliance with MC/DC, the Modified Condition/Decision Coverage standard used to evaluate flight control software.
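As a rough illustration of the test-harness idea (not Code Metal's actual method, which the company has not published), a differential harness runs the legacy routine and its translation over a space of inputs and flags any divergence. True formal verification goes further, proving equivalence over all possible states symbolically rather than enumerating them. All names here are hypothetical stand-ins:

```python
import itertools

def legacy_checksum(data: bytes) -> int:
    # Stand-in for a routine from the original (say, Ada or Fortran) codebase.
    total = 0
    for b in data:
        total = (total + b) % 256
    return total

def translated_checksum(data: bytes) -> int:
    # Stand-in for the machine-translated version in the target language.
    return sum(data) % 256

def equivalence_harness(f_old, f_new, inputs):
    """Return every input on which the two implementations disagree."""
    return [x for x in inputs if f_old(x) != f_new(x)]

# Exhaustive over all 2-byte inputs: tractable here, but real codebases need
# symbolic methods, which is exactly where formal verification comes in.
all_inputs = (bytes(p) for p in itertools.product(range(256), repeat=2))
assert equivalence_harness(legacy_checksum, translated_checksum, all_inputs) == []
```

The harness catches behavioral drift on the inputs it tries; the formal-methods layer Code Metal claims is what would extend that guarantee from "every input we tested" to "every input, period."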

"There's no way to generate an error," Morales told WIRED. "The software will just say, 'There's no solution for this' if we can't complete the translation."

That is a strong claim. It means Code Metal would rather refuse a job than produce flawed output, a posture that maps well onto defense procurement culture, where the cost of a wrong answer dwarfs the cost of a slow one.
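Morales's "no solution" posture can be sketched as a gate: translated code is returned only if verification passes, and the pipeline declines otherwise. This is a minimal sketch of the idea with hypothetical function names, not Code Metal's API:

```python
from typing import Callable, Optional

def verified_translate(
    source: str,
    translate: Callable[[str], str],
    verify: Callable[[str, str], bool],
) -> Optional[str]:
    """Return translated code only if verification passes; otherwise
    refuse rather than emit unverified output."""
    candidate = translate(source)
    if verify(source, candidate):
        return candidate
    return None  # "There's no solution for this"

# Toy demo: the "translation" uppercases, and "verification" checks that
# the result round-trips back to the source.
result = verified_translate("mov r0, r1", str.upper, lambda s, c: c.lower() == s)
assert result == "MOV R0, R1"
```

The design choice is the point: failure mode is refusal, not a plausible-but-wrong answer, which is the opposite of how most generative coding tools behave.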

B Capital partner Yan-David Erlich, an investor in the Series B, put it without polish. The code that controls satellites and communications infrastructure "is old, it's crufty, it's written in programming languages that people might not use anymore. It needs to be modernized. But in the course of translation, you might be inserting bugs, which is catastrophically problematic."

The investor thesis boils down to timing. AI can now generate code fast enough to matter, and the defense industry is desperate enough to buy. B Capital has a phrase for this, "verified intelligence," and the logic is straightforward. AI is embedding itself deeper into infrastructure where mistakes kill people. The bar has to move from plausible output to provable correctness. There is no middle ground.

From seven employees to unicorn in twenty months

The funding trajectory does not look like a normal startup. It looks like a defense contractor sprinting toward a program of record.

The company incorporated in 2023 with seven people and had raised a $16.5 million seed from J2 Ventures and Shield Capital by July 2024. Accel came in at $36.5 million for the Series A in November 2025, valuing the company at $250 million after hearing about eight-figure revenue. Three months later Salesforce Ventures wrote the $125 million Series B check. Valuation jumped to $1.25 billion. Fivefold in ninety days. That is not a normal trajectory.

Code Metal says it is already profitable and cash-flow positive. No audited financials to back that up, but if true, the company is a genuine outlier in an AI startup world that mostly burns cash and worries about margins later. The company has not disclosed specific revenue figures, but the phrase "eight-figure revenue" at the Series A stage and profitability at the Series B stage suggest a business model tied to large government and industrial contracts rather than high-volume, low-margin SaaS.

The customer list tells you where the money comes from. L3Harris, RTX (formerly Raytheon), the Air Force, Toshiba, Robert Bosch. Code Metal says some of those clients went from months-long deployment timelines to days. It is also in talks with a "large chip company" for code portability work across processor platforms, though it declined to name the firm.

The operators tell the story

Code Metal's recent appointments point in two directions at once.


Ryan Aytay joined as president and chief operating officer. He spent 19 years at Salesforce, rising to chief business officer under CEO Marc Benioff before running Tableau as CEO from 2023 until his departure in early 2026. Aytay's background is enterprise sales and partnerships at massive scale. He managed Salesforce's strategic relationships with Amazon, Google, Apple, and IBM. You don't hire a former Tableau CEO to run a seven-person Boston startup unless you're building a sales operation that can handle decade-long procurement cycles and the relationship-heavy politics of defense contracting.

Then there's Laura Shen, executive vice president of growth, who used to run the China desk at the U.S. National Security Council. That hire says something about where Code Metal sees its next contracts coming from. Not just the Pentagon.

The founding team carries its own weight. Morales built AI reasoning systems for the F-35. He spent years at MIT Lincoln Laboratory working on counter-drone defense for the U.S. Capitol, and later moved to Microsoft to develop computer vision for HoloLens. CTO Alex Showalter-Bucher, also a Lincoln Lab alumnus, spent a decade across the Navy, Army, and Department of Homeland Security. The engineering bench draws from Intel, NASA, MathWorks, Lightmatter, and OpenAI.

Nobody on this team learned about defense software from a pitch deck. They were in the rooms where legacy code caused real problems, watching engineers spend months on translation work that produced more anxiety than confidence. Talk to Morales long enough and you hear that frustration in every answer. It is not a talking point. It is the reason the company exists.

The gap between proof of concept and proof at scale

Formal verification is not new. If you have been anywhere near defense procurement, you have seen it. Aerospace engineers and nuclear weapons designers have used it for decades to certify safety-critical software. Old technique. Code Metal's bet is that AI can bring the cost down far enough to make it practical outside those narrow, well-funded niches.

The catch is obvious. Formal verification works when the program is small and well-defined. Hand it a million-line legacy codebase, three decades of accumulated decisions by rotating teams working in multiple languages, half of it never documented? That is a different animal entirely. Nobody has proved formal verification scales to that kind of mess.

Code Metal won't say much about how it actually works. WIRED called the company "skittish about sharing too many details," and that tracks. Competitive secrecy makes sense when your moat is methodology, but it also means outsiders can't verify the claims. Zero errors in current pipelines? Maybe. For the controlled translation tasks the company has run so far, that could well be true. The question is what happens when the codebases get bigger, older, and tangled in ways nobody documented at the time.

Look at the broader AI coding market for context. The best code generation models in 2025 hit somewhere between 70 and 82 percent accuracy on common programming languages. Performance remains, as one AI safety report put it, "jagged," with leading systems still failing at some tasks that appear straightforward. Code Metal's proposition is that its neuro-symbolic approach, combining LLMs with formal methods, overcomes these limitations. The formal verification layer acts as a filter: if the AI hallucinates, the math catches it.

That architecture sounds compelling. But the company's own pricing model hints at the difficulty. Morales told WIRED that Code Metal negotiates pricing individually with each customer based on time to develop a kernel, lines of code translated, or development time saved. He acknowledged the process "can get murky." When the vendor acknowledges murkiness, the product is still finding its shape.

What Code Metal reveals about defense procurement

Defense technology meant hardware for decades. Jets, ships, missiles. Software? A cost center bolted onto the side of weapons programs. The F-35's budget categories still treat it that way, which is exactly why the Army wants BA-8, a funding line that treats software like what it actually is rather than an afterthought stapled to airframes.

Code Metal lives in the gap between those two worlds. Its customers are defense contractors and military branches that know their software needs modernizing but cannot do the work themselves fast enough. The pitch is speed without recklessness. Anduril and Palantir already proved that venture-backed companies could win Pentagon contracts. Code Metal is chasing something less visible but possibly bigger. The plumbing.

Autonomous drones get the headlines. Nobody writes breathless profiles about translating a thirty-year-old satellite protocol from Ada to Rust. But keeping legacy infrastructure alive while slowly dragging it into modern standards, that boring work might be worth more in total addressable market than any single new weapons system.

Rob Keith at Salesforce Ventures, who led the Series B, put it in procurement language. "AI code generation has hit an inflection point," he said. "Mission-critical industries cannot deploy what they cannot verify." Venture-capital polish aside, he's right about the core problem. The bottleneck in defense AI is not generation. It is trust.

The test

Code Metal started because getting working algorithms onto actual hardware took too long and broke too often. Whether the company can repeat that fix at orders of magnitude greater scale is the whole bet.

The pilots are done. L3Harris, RTX, and the Air Force are named customers. Morales says every pilot deployment has advanced to the next phase. The Series B gives Code Metal the capital to staff up, with Aytay running operations and Shen working growth channels that extend into national security policy.

Now comes the hard part. Converting pilot contracts into programs of record, the multiyear, multimillion-dollar deals that actually sustain defense technology companies. Not demos. Not proofs of concept. Production contracts that survive procurement cycles measured in decades.

That conversion will stress-test everything. Can formal verification hold at scale? Can the pricing model withstand government auditors? And can a startup founded in 2023 earn the kind of institutional trust that defense procurement demands, the kind that usually takes a decade to build?

The F-35's codebase took decades to accumulate. Modernizing it will take years no matter who does the work. Code Metal's $1.25 billion valuation says the market thinks the company can compress that timeline dramatically. The next twelve months will tell. Either the early contracts expand into production, or they stall in review. The math does not care about the valuation.

Frequently Asked Questions

What does Code Metal actually do?

Code Metal uses AI to translate software from one programming language to another, then applies formal verification to mathematically prove the translated code behaves identically to the original. Its primary customers are defense contractors and military branches modernizing legacy systems.

What is formal verification and why does defense need it?

Formal verification is a mathematical method that checks every possible state of a program to prove correctness, rather than just testing for common failures. Aerospace and nuclear engineers have used it for decades. For military software, a single bug can ground a fleet or breach a satellite link, making proof-based testing worth the cost.

How did Code Metal reach a $1.25 billion valuation so fast?

The company incorporated in 2023 and raised a $16.5 million seed by mid-2024. Accel led a $36.5 million Series A in November 2025 at $250 million. Salesforce Ventures then wrote the $125 million Series B check three months later. The company claims eight-figure revenue and profitability, though it has not released audited financials.

Who are Code Metal's main competitors?

Dozens of AI coding tools exist, but most focus on code generation without verified correctness. Code Metal's closest competitive space includes general AI code translation tools, but its formal verification layer targets a different buyer: organizations where incorrect code has catastrophic consequences. Defense primes like L3Harris and RTX are customers, not competitors.

What is the biggest risk to Code Metal's business?

Scaling formal verification to massive legacy codebases. The technique works well on small, well-defined programs. Pentagon systems involve millions of lines written over decades by rotating teams in multiple languages, much of it undocumented. Nobody has proven formal verification works reliably at that scale. The company also maintains secrecy about its methods, making independent evaluation difficult.



Private equity owners slash valuation of Swiss watchmaker Breitling // Performance of luxury brand has faltered since CVC sold majority stake to Partners Group in 2023

  • Valuation Drop: Breitling's private equity owners have significantly reduced its valuation, potentially by half since 2023, due to underperformance.
  • Strategy Review: CVC and Partners Group are re-evaluating Breitling's strategy after pressure from CVC amid the brand's challenges.
  • Expensive Store Rollout: The brand has faced difficulties with a costly expansion of its retail stores amid weak luxury watch demand and US tariffs.
  • Sales Decline: Breitling's sales reportedly decreased by approximately 3% last year, underperforming the overall Swiss watch market and other private brands.
  • UK Market Weakness: Sales in the UK, a key market, have been particularly poor, showing a 25% drop in the year leading up to March 2025.
  • Increased Costs and Debt: The expansion of Breitling's boutique network has led to higher fixed costs, contributing to a downgrade of its debt rating by Moody's.
  • Ownership Structure: While Partners Group holds majority control, CVC retains a stake, and both firms jointly manage the company, though disagreements were reportedly denied.
  • Future Prospects: Despite challenges, Breitling's owners express confidence in the brand's potential for a future initial public offering, citing investments in growth initiatives and sponsorships.

Breitling’s private equity owners have slashed the valuation of the Swiss watchmaker to as little as half its 2023 level, as performance has faltered under CVC and Partners Group’s joint ownership.

The two buyout firms are reviewing the strategy of Breitling after pressure from CVC, three people familiar with the matter said, as the brand has struggled with an expensive store rollout at a time of subdued demand for luxury watches and US tariffs on Switzerland.

Amsterdam-listed CVC handed over majority ownership of the company to Partners as part of a $4.5bn sale three years ago, after selling a small interest to the Swiss buyout firm in 2021. CVC had initially bought the 140-year-old watch brand for about €800mn in 2017, but held on to a roughly 20 per cent stake in 2023 through a new fund.

The 2023 deal was controversial inside Partners Group, according to two people familiar with the matter, due to questions over how much more Breitling could expand after its rapid growth under CVC’s ownership.

The two buyout firms still share control, but Partners Group holds significant sway with more than 50 per cent of the shares and Partners co-founder Fredy Gantner chairing the watch group’s board.

CVC has marked its stake down to about 0.5 times invested capital, based on the level at which it reinvested in 2023, according to a person familiar with the matter. Partners Group values its holding at roughly 0.7 times, which a person close to the firm said was because it invested at a lower valuation in 2021 than it did in 2023.

Partners Group, CVC and Breitling said there had been “no differences of opinion between [the private equity firms] about Breitling’s strategy”.

A mock Breitling shop on display at the headquarters of Partners Group in Zug, Switzerland © Alexandra Heal/FT

The Swiss watch industry is notoriously opaque, making it difficult to assess performance with precision. Most leading brands are privately held and disclose little financial information.

But momentum at Breitling has cooled since 2022, with industry data suggesting the brand’s climb up the Swiss watch revenue rankings has plateaued.

Morgan Stanley and Swiss consultancy LuxeConsult estimate that Breitling’s sales declined about 3 per cent last year, lagging both the broader Swiss watch market and the sector’s strongest private brands.

Performance in the UK — which accounts for 8 per cent of revenues — has been particularly troubling, one of the people said, with sales in the country down 25 per cent in the year to March 2025. In the US, its largest market, Breitling is facing stiff competition from brands such as TAG Heuer and Tudor. 

Last August Moody’s downgraded Breitling’s debts, noting a “sharp” decline in earnings in the year ending March 2025, “exacerbated by increased fixed costs” stemming from opening more Breitling shops.

CVC started the programme of store openings in 2017, but the recent expansion of Breitling’s boutique network to around 300 stores had been controversial, one analyst said. “That rollout is coming to an end,” a person familiar with the strategy said.

Breitling’s owners are now considering cost cuts, that person added.

Gantner regarded Breitling, known for its chronographs and aviation heritage, as a trophy asset for his Swiss firm, one former Partners employee told the FT last year, and a mock Breitling shop sits in Partners’ global headquarters near Zurich. “People’s impression was that it is his baby,” the employee said.

One person close to Partners Group said that Breitling’s momentum relative to direct competitors remained intact, and that they were “confident Breitling will be a strong initial public offering candidate in 2028, maybe 2027, maybe 2029”.

Another person close to the firm added that the company had made “significant investments into growth initiatives” such as the purchases of two watch brands and sponsorships with NFL and Aston Martin.

Moody’s said its rating downgrade stemmed from Breitling’s “high sales concentration in a single brand” compared with larger and more diversified watch groups.

The rating agency also cited the brand’s “highly leveraged financial structure” and history of borrowing to return cash to shareholders. The company borrowed over €1bn to pay dividends under CVC’s ownership, according to PitchBook. 

Moody’s added that Breitling retained “adequate liquidity” and “positive long-term growth prospects”, however.
