
Inside Elon Musk’s Optimus Robot Project - WSJ

  • Optimus Vision: Musk pitches humanoid robots that could eliminate poverty, generate “infinite” revenue, and become Tesla’s biggest product.
  • Human-Scale Ambition: Musk envisions Optimus doing factory work, domestic chores, surgeries, and aiding Mars colonization with millions produced annually.
  • Current Limitations: Robots remain hand-built, remotely operated, and struggle with dexterous hands and autonomous operation in complex environments.
  • Company Skepticism: Some Tesla staff question Optimus utility in manufacturing where dedicated robots or current automation suffice.
  • Financial Bet: Musk’s compensation plan ties $1 trillion payout to making Tesla an $8.5 trillion company and selling at least one million robots.
  • R&D Efforts: Tesla trains robots with camera-equipped humans, gathers indoor navigation data, and practices simple tasks like sorting and folding.
  • Industry Context: Other robotics firms focus on wheeled machines, emphasizing stability and safety versus humanoid balance challenges.
  • Market Outlook: Analysts vary, with ARK excluding Optimus from near-term models and Morgan Stanley projecting $7.5 trillion global humanoid revenue by 2050.

By Becky Peterson
Jan. 2, 2026 9:00 pm ET

The future of Tesla is an army of humanoid robots that Elon Musk says could eliminate poverty and the need for work. He has told investors the robots could generate “infinite” revenue for Tesla and have potential to be “the biggest product of all time.”

Musk has bet the company and his personal fortune on this vision of the world in which Optimus, as it is known, works in factories, handles domestic chores, performs surgeries and travels to Mars to help humans colonize the planet. Though today each robot is made by hand, Musk has proposed manufacturing millions of robots a year.

Optimus still has a lot to learn about the world before it is capable of replacing its human creators in the type of full-scale societal shift that Musk has in mind. In public appearances, the robot is often remotely operated by human engineers. On the engineering side, it has proven difficult for Tesla to create a hand for the bot with both the sensitivity and dexterity of a human. Inside Musk’s companies, some employees have questioned the usefulness of the bots for routine business operations like manufacturing. 

Optimus made its red carpet debut in October at the ‘Tron: Ares’ premiere in Hollywood. Alberto E. Rodriguez/Getty Images for Disney

Musk is motivated to prove the skeptics wrong. His new compensation package gives him 10 years to make Tesla an $8.5 trillion company and sell at least one million bots to customers, among other product and financial goals. Success could mean earning Musk a $1 trillion pay package, and expanding Tesla far beyond the electric vehicle industry where it made its mark.

“The car is to Tesla what the book was to Amazon,” Adam Jonas, an analyst with Morgan Stanley, said this summer. “Tesla used cars as a laboratory to get good at other things.”

On Friday, Tesla reported its vehicle sales fell 16% in the fourth quarter and dropped 9% for all of 2025, leaving it behind China’s BYD for the year. Tesla’s share price, which tumbled in early 2025 as the company’s EV sales slumped, had rebounded in recent months amid optimism in Musk’s pivot to robotaxis and humanoid robots.

[Chart: Tesla share price, Feb. 2025 through early 2026, roughly $200–$550. Source: FactSet, as of Jan. 2, 4 p.m. ET.]

Optimus is still under development, but the bot has become a familiar sight at company events and for many employees. Inside Tesla’s Palo Alto, Calif., engineering headquarters, robots routinely circle the inside perimeter of offices gathering information on how to navigate a room alongside humans.

In Tesla’s labs, the nearly 6-foot-tall machine practices rote tasks like sorting Legos by color, folding laundry and using a drill to screw a fastener, former employees said. In October, the bot made its red carpet debut at the “Tron: Ares” premiere in Hollywood, performing a choreographed fight sequence with actor Jared Leto.

Tesla is one of several companies pushing the frontiers of robotics in the hopes of cornering a nascent market. A crowd of Silicon Valley startups like 1X and Figure, other manufacturers like Hyundai’s Boston Dynamics, and Chinese robotics companies are eager to sell their robots that can fold laundry or manufacture vehicles.

Today, there are limits on how much robots can do. Many factories, including Tesla’s, rely on robotic arms to do heavy lifting or dangerous tasks like moving hot metal. Those robots are largely stationary and programmed to do specific tasks. That leaves humans with jobs that require flexibility and precision, like installing cables or seats into cars moving across an assembly line.

Humanoids have an obvious appeal: bipedal, with flexible joints, robots like Optimus are designed to function more easily in spaces meant for humans. 

Roboticists, however, have struggled to design robots with enough dexterity, sensitivity and adaptability to move around freely, said Ken Goldberg, a roboticist at the University of California, Berkeley.

“I’ve heard Elon Musk say hands are the hard part. It’s true, but it’s not only the hand—it’s the control, the ability to see the environment, to perceive it and then compensate for all this uncertainty. That’s the research frontier,” Goldberg said. “Getting these robots to do something useful is the problem.”

Some analysts have struggled to price Tesla’s humanoid opportunity, given how new the industry is, and exclude it from their financial models. Even Tesla bull ARK Invest, which expects Tesla’s share price to climb to $2,600 from around $400 today, left Optimus out of its model for 2029 because it doesn’t expect the product to be commercially successful until later.

“We believe initial versions of the robot will likely have a limited set of performable tasks,” Tasha Keeney, a director at ARK Invest, said in an email. “Given Tesla’s competitive advantages in embodied AI and manufacturing scale, we expect the company to be a formidable competitor in the space.”

Morgan Stanley’s Jonas, who now covers the robotics industry, predicts that by 2050, humanoids will bring in $7.5 trillion in annual revenue across the industry globally. Capturing even a fraction of that market could supersize Tesla’s revenues, which came in at $98 billion in 2024.

At first it seemed like a joke. Musk unveiled Tesla’s bot concept at an event in 2021 with a human dancing on stage dressed in a robot costume. 

“It’s intended to be friendly, of course, and navigate through a world built for humans and eliminate dangerous repetitive and boring tasks,” he said at the time. When Musk returned to the stage a year later, he demoed a prototype called Bumblebee, with visible wires and actuators. 

Behind the scenes, Tesla engineers were working out of a kitchenette on campus. Soon after, the expanding group of Optimus engineers moved to a basement, then a large parking lot in another building, one former employee said. Tesla struggled to find the right parts to build its robots, and had to make certain components like its actuators, which power the bot’s movements, from scratch.

The big idea was to take Tesla’s learnings from its self-driving technology, which uses software and cameras to autonomously drive automobiles. Musk told colleagues that its cars were just robots on wheels. 

The humanoids would need to learn how to move around indoor spaces and avoid safety risks like tripping and falling on top of a nearby human or pet. To solve this problem, Tesla hired human data collectors to wear cameras and backpacks, and walk around collecting training data. Tesla had people collecting data in several shifts, running 24/7.

An Optimus robot handing out candy in New York. A big question is how to give robots the dexterity of humans and ability to understand their environment for sensitive tasks. Michael Bucher/WSJ

Another solution was to collect data using Optimus itself. The company set up bots to circle the inside perimeter of its offices learning how to navigate indoors. Sometimes the bot would fall over, after which an engineer would wheel over a robot hoist and pick the bot back up.

In October 2024, at a Warner Bros. sound stage in Burbank, Calif., Musk demonstrated his vision for cities replete with autonomous vehicles and robots. 

In a disco-ball-decorated rotunda, five Optimus bots performed a dance routine to Haddaway’s “What Is Love.” Elsewhere on the lot, the bots served drinks while outfitted in cowboy hats and bow ties.

Behind the scenes, Tesla engineers worked overtime to troubleshoot technical issues, according to people on the ground. While robots in the rotunda were programmed to dance, other robots at the event were teleoperated by engineers who wore body suits and virtual-reality headsets. They guided the bots’ interactions with guests, including while serving drinks behind the bar.

Each robot on the ground required constant monitoring from several engineers: one in a suit teleoperating its movements, one with a laptop, and others standing nearby to keep track of the bot’s physical performance.

Inside Tesla’s lab, Optimus proved pretty good at learning simple tasks. In May, the company shared a video that appeared to show Optimus performing various jobs in response to verbal orders from an engineer, such as putting trash in the bin, cleaning up crumbs, vacuuming, and moving a Model X part from a box. All of the activities were “learned directly from human videos,” according to the company.

Despite this progress, inside the company, some manufacturing engineers said they questioned whether Optimus would actually be useful in factories. While the bot proved capable at monotonous tasks like sorting objects, the former engineers said they thought most factory jobs are better off being done by robots with shapes designed for the specific task.

The big question, according to Goldberg, the Berkeley roboticist, is how to give robots the dexterity of humans and the ability to understand their environment well enough to complete useful but sensitive tasks, such as clearing a dinner table. “Even a child could clear a dinner table,” said Goldberg, who is also chief scientist at Ambi Robotics and Jacobi Robotics.

Some of Tesla’s competitors have concluded that legs are the problem. Evan Beard is chief executive of Standard Bots, which sells manufacturing robots on wheels. Beard said that wheels make the bots more stable, and therefore safer to work around, and easier to power down if something goes wrong. 

“With a humanoid, if you cut the power, it’s inherently unstable so it can fall on someone,” Beard said. “For a factory, a warehouse or agriculture,” he said, “legs are often inferior to wheels.”

Tesla has backed away from its initial Optimus timeline of putting a commercial version to work in its own factories by the end of the year. The company is currently working on its third generation of the robot.

In Tesla marketing materials, Optimus has a role as a domestic worker watering plants, unpacking groceries and handling other household tasks, giving its owners time to hang out with their families.

“Who wouldn’t want their own personal C-3PO/R2-D2?” Musk said in November, referencing the droid characters in “Star Wars” movies. “This is why I say humanoid robots will be the biggest product ever. Because everyone is gonna want one, or more than one.”

Write to Becky Peterson at becky.peterson@wsj.com

Copyright ©2026 Dow Jones & Company, Inc. All Rights Reserved.

Appeared in the January 3, 2026, print edition as 'Inside Musk’s Pivot To Humanoid Robots'.

Sorry, Zohran, We're Kind of Done With The "Warmth of Collectivism."

  • Mayor Mamdani's promise: New York City’s new mayor vowed to replace rugged individualism with collectivism during his swearing-in speech.
  • Authorial skepticism: The author suspects the collectivist rhetoric is symbolic virtue signaling benefiting elites while the poor remain underserved.
  • Pendulum framework: The book “Pendulum” interprets Strauss and Howe’s 80-year cycle as alternating 40-year “We” (collectivism) and “Me” (individualism) eras.
  • Collectivism trajectory: Authors argue collectivism begins with solving inequality, then deteriorates into oppressive ideologies before yielding to individualism.
  • Witch-hunt phase: The book predicted 2013–2023 would foster judgmental righteousness, citing historical witch hunts, McCarthyism, Nazism, and Stalinism.
  • Recent phenomena: Cancel culture and political prosecutions of opponents are presented as evidence of the predicted “we” zenith’s extremist phase.
  • Social consequences: Compact Magazine noted a “Lost Generation” of young men, especially white men, marginalized in the current collectivist environment.
  • Outlook: The author expresses hope the pendulum will swing back toward individualism and anticipates 2026 as a potential turning point.

In his swearing-in ceremony, the new Mayor of New York City, Zohran Mamdani, said, "We will replace the frigidity of rugged individualism with the warmth of collectivism."

It sent chills down the spines of most of those who understand the history of governments given over to collectivism. I think it’s likely in name only, a virtue signal for those at the top to feel better about income inequality.

Let Zohran meet the needs of the poor and the marginalized while we enjoy our mansion in the Hamptons in peace. They’ll have to cough up the taxes to buy off some of that guilt, but that’s a small price to pay compared to an unruly mob banging down your door.

It struck me a little funny since I’ve been thinking about the book Pendulum: How Generations of the Past Shape Our Present and Predict Our Future.

In it, the authors see the 80-year cycle in William Strauss and Neil Howe’s The Fourth Turning as two 40-year cycles: the “We” cycle (collectivism) and the “Me” cycle (individualism).

They theorize that humans always take a good thing too far. Collectivism starts well, solving problems for the underclass. Then, it becomes toxic and dangerous, like Nazi Germany dangerous. Then the pendulum begins to swing back to individualism, where it thrives for about 20 years, then winds down, as humans take a good thing too far, leading to nihilism, emptiness, and selfishness.

From the book, which was written in 2011:

The second half of the Upswing of “We” and the first half of the Downswing from it (2013–2023) bring an ideological “righteousness” that seems to spring from any group gathered around a cause. The inevitable result is judgmental legalism and witch hunts. The origin of the term witch hunt was the Salem witch trials, a series of hearings before county court officials to prosecute people accused of witchcraft in the counties of Essex, Suffolk, and Middlesex in colonial Massachusetts, between February 1692 and May 1693, exactly at the beginning of the second half of the Upswing toward the “We” Zenith of 1703.

Senator Joseph McCarthy was an American promoter of this witch-hunt attitude at America’s most recent “We” Zenith of 1943 (see the “House Un-American Activities Committee,” 1937–1953); Adolf Hitler was the German promoter (see the Holocaust, 1933–1945); and Joseph Stalin was the Soviet promoter (see the Great Purge, 1936–1938). Our hope is that we might collectively choose to skip this development as we approach the “We” Zenith of 2023. If enough of us are aware of this trend toward judgmental self-righteousness, perhaps we can resist demonizing those who disagree with us and avoid the societal polarization that results from it. A truly great society is one in which being unpopular can be safe.

According to this theory, the “We” cycle peaked in 2013, just as Obama was about to enter his second term, and as Critical Race Theory began infiltrating schools, and, it was right around the time Helen Andrews says the Great Feminization began.

Michael R. Drew and Roy H. Williams, the authors of Pendulum, also targeted the height of the “witch hunt phase” at 2023, which seems to be exactly when cancel culture escalated into the indictments against Trump and brazen lawfare. Cancel culture seems like child’s play compared to trying to put your political opponent in prison and throw him off the ballots.

Even though the book Pendulum was written back in 2011, they seemed to know back then that the time we’re living through now would be very bad. Maybe not Nazi Germany bad, but Tyler Robinson and Thomas Matthew Crooks bad. Collectivism has no choice but to go to that awful place where they demand compliance and conformity.

Rugged individualism not only built this country, but it also built New York City. It’s the land of opportunity. If you can make it there, you can make it anywhere. Deciding New York, of all places, will be a socialist hellhole is a choice the voters made. They’ll have to live with it, good or bad or ugly.

According to Pendulum, we’re in the “collectivism gone wrong” phase, and we’re moving back toward the individualism phase — and honestly, not a moment too soon.

The downstream effects of our collectivist phase are only now beginning to show themselves. There was a good piece in Compact Magazine about the Lost Generation of white men who were told not even to bother applying to jobs because they wouldn’t even be considered.

[The Lost Generation — Compact Magazine](https://www.compactmag.com/article/the-lost-generation/)

I know a lot of young men who have spent the last ten years aimless, rootless, with no place to land. It isn’t only white men, but it’s especially white men, and according to the Left, we’re not supposed to care. We have to care.

Right now, Zohran is a man with a dream. It happens to be an old dream, one proven again and again to fail, but I guess we’ll have to wait and see whether he can really change New York City for the better or whether it will just be another role he plays because it sounded good at the time.

Start spreading the news…

Anyway, Happy New Year. I have high hopes for 2026. Maybe this will be the year things start to turn around. Thank you for being such faithful readers. I so appreciate it. I meant to get my podcast out tonight, but it’s not quite ready. Until next time…

In Ukraine, an Arsenal of Killer A.I. Drones Is Being Born in War Against Russia - The New York Times

  • Semiautonomous drone deployment: Ukrainian forces now rely on drones that can switch from pilot control to AI-guided terminal strikes, exemplified by Bumblebee and other systems achieving over 1,000 combat flights.
  • Underdog module: NORDA Dynamics created an aftermarket module that enables F.P.V. drones to lock onto targets and complete terminal attacks autonomously, extending range to roughly 2 kilometers.
  • Swarm coordination: Sine Engineering’s Pasika app lets a single operator autonomously launch, loiter, and transfer control among dozens of drones, creating rapid-sequence swarm strikes.
  • Counter-jamming navigation: Vermeer’s Visual Positioning System equips deep-strike drones with camera-based navigation that bypasses GPS denial, guiding missions across contested airspace.
  • Schmidt-backed platforms: Eric Schmidt’s venture supplies Bumblebee, Hornet, Merops, and other AI-enabled drones to elite Ukrainian brigades, combining target recognition, terminal guidance, and jamming resistance.
  • Human oversight emphasis: Developers and operators maintain humans designate targets, and Ukrainian doctrine aims to keep people in the loop despite growing autonomous capabilities.
  • Ethical concerns: Technologists and activists warn that AI-enabled targeting raises moral questions about dehumanized kill decisions, especially once algorithms pursue people independently.
  • Strategic implications: Ukrainian leaders treat AI-enhanced drones as essential to national defense, while Western observers worry existing militaries are unprepared for the emerging autonomous warfare landscape.

In the past year, drone warfare in Ukraine has undergone a chilling transformation.

Most drones require a human pilot. But some new Ukrainian drones, once locked on a target, can use A.I. to chase and strike it — with no further human involvement.

This is the story of how the battlefield became the birthplace of a powerful new weapon.

On a warm morning a few months ago, Lipa, a Ukrainian drone pilot, flew a small gray quadcopter over the ravaged fields near Borysivka, a tiny occupied village abutting the Russian border. A surveillance drone had spotted signs that an enemy drone team had moved into abandoned warehouses at the village’s edge. Lipa and his navigator, Bober, intended to kill the team or drive it off.

Another pilot had twice tried hitting the place with standard kamikaze quadcopters, which are susceptible to radio-wave jamming that can disrupt the communication link between pilot and drone, causing weapons to crash. Russian jammers stopped them. Lipa had been assigned the third try but this time with a Bumblebee, an unusual drone provided by a secretive venture led by Eric Schmidt, the former chief executive of Google and one of the world’s wealthiest men.

Bober sat beside Lipa as he oriented for an attack run. From high over Borysivka, one of the Bumblebee’s two airborne cameras focused on a particular building’s eastern side. Bober checked the imagery, then a digital map, and agreed: They had found the target. “Locked in,” Lipa said.

With his right hand, Lipa toggled a switch, unleashing the drone from human control. Powered by artificial intelligence, the Bumblebee swept down without further external guidance. As it descended, it lost signal connection with Lipa and Bober. This did not matter: It continued its attack free of their command. Its sensors and software remained focused on the building and adjusted heading and speed independently.

Another drone livestreamed the result: The Bumblebee smacked into an exterior wall and exploded. Whether Russian soldiers were harmed was unclear, but a semiautonomous drone had hit where human-piloted drones missed, rendering the position untenable. “They will change their location now,” Lipa said. (Per Ukrainian security rules, soldiers are referred to by their first name or call sign.)

Throughout 2025 in the war between Russia and Ukraine, in largely unseen and unheralded moments like the warehouse strike in Borysivka, the era of killer robots has begun to take shape on the battlefield. Across the roughly 800-mile front and over the airspace of both nations, drones with newly developed autonomous features are now in daily combat use. By last spring, Bumblebees launched from Ukrainian positions had carried out more than 1,000 combat flights against Russian targets, according to a manufacturer’s pamphlet extolling the weapon’s capabilities. Pilots say they have flown thousands more since.

Bumblebee’s introduction raised immediate alarms in the Kremlin’s military circles, according to two Russian technical intelligence reports. One, based on dissection of a damaged Bumblebee collected along the front, described a mystery drone with chipsets and a motherboard “of the highest quality, matching the level of the world’s leading microelectronics manufacturers.” The report noted the sort of deficiencies expected of a prototype but ended with an ominous forecast: “Despite current limitations,” it declared, “the technology will demonstrate its effectiveness” and its range of uses “will continue to expand.”

That conclusion was prescient but understated, for the simple reason that Bumblebees hardly fly alone. Under the pressures of invasion, Ukraine has become a fast-feedback, live-fire test range in which arms manufacturers, governments, venture capitalists, frontline units and coders and engineers from around the West collaborate to produce weapons that automate parts of the conventional military kill chain. Equipped with onboard proprietary software trained on large data sets, and often run on off-the-shelf microcomputers like Raspberry Pi, drones with autonomous capabilities are now part of the war’s bloody and destructive routine.

In repeated visits to arms manufacturers, test ranges and frontline units over 18 months, I observed their development firsthand. Functions now performed autonomously include: pilotless takeoff or hovering, geolocation, navigation to areas of attack, as well as target recognition, tracking and pursuit — up to and including terminal strike, the lethal endpoint of the journey. Software designers have also networked multiple drones into a shared app that allows for flight control to be passed between human pilots or for drones to be organized into tightly sequenced attacks — a step toward computer-managed swarms. Weapons with these capabilities are in the hands of ground brigades as well as air defense, intelligence and deep-strike units.
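
As a rough mental model, the functions listed above can be laid out as stages of a kill chain, each of which may or may not be delegated to software. The sketch below is purely illustrative, in Python; the stage names paraphrase the article, and the `autonomy_profile` helper is a hypothetical construct, not any real system's API.

```python
# Illustrative only: kill-chain stages named in the article, with a
# flag marking which stages a given drone automates.
KILL_CHAIN = [
    "takeoff", "geolocation", "navigation",
    "target_recognition", "tracking", "pursuit", "terminal_strike",
]

def autonomy_profile(automated):
    """Tag each kill-chain stage as 'AI' or 'human' for a given drone."""
    unknown = set(automated) - set(KILL_CHAIN)
    if unknown:
        raise ValueError(f"unknown stages: {unknown}")
    return {stage: ("AI" if stage in automated else "human")
            for stage in KILL_CHAIN}

# A last-mile drone per the article: a pilot flies the approach,
# then software handles the final tracking and strike.
profile = autonomy_profile({"tracking", "pursuit", "terminal_strike"})
assert profile["navigation"] == "human"
assert profile["terminal_strike"] == "AI"
```

The point the article makes is visible in the data structure: no publicly known drone in the war sets every stage to "AI" at once, but designers keep moving individual stages from one column to the other.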

Bumblebee attack drones at a combat testing range outside Kharkiv, Ukraine.Credit...Finbarr O'Reilly for The New York Times

Drones under full human control remain far more abundant than their semiautonomous siblings. They cause most battlefield wounds. But unmanned weapons are crossing into a new frontier. And while no publicly known drone in the war automates all steps of a combat mission into a single weapon, some designers have put sequential steps under the control of artificial intelligence. “Any tactical equation that has a person in it could have A.I.,” said the founder of X-Drone, a Ukrainian company that has trained software for drones to hunt for and identify a stationary target, like an oil-storage tank, and then hit it without a pilot at the controls. (The founder asked that his name be withheld for security reasons.)

The Kremlin’s forces are also adopting A.I.-enhanced weapons, according to examinations of downed Russian drones by Conflict Armament Research, a private arms-investigation firm. With both sides investing, Mykhailo Fedorov, Ukraine’s first deputy prime minister, said A.I.-powered drones are at the center of a new arms race. Ukraine’s defenders must field them in large numbers quickly, he said, or risk defeat. “We are trying to stimulate development of every stage of autonomy,” he said. “We need to develop and buy more autonomous drones.”

To be sure, the familiar weapons of modern battlefields, all under human control, have caused immeasurable harm to generations of soldiers and civilians. Even weapons celebrated by generals and pundits as astonishingly precise, like GPS-guided missiles or laser-guided bombs, have often struck the wrong places, killing innocents, often without accountability. No golden age is being left behind. Rather, semiautonomous drones compound existing perils and present new threats. Peter Asaro, vice chair of the Stop Killer Robots Campaign and a philosopher and an associate professor at the New School, warned of rising dangers as weapons enter unmapped practical and ethical terrain. “The development of increasing autonomy in drones raises serious questions about human rights and the protection of civilians in armed conflict,” he said. “The capacity to autonomously select targets is a moral line that should not be crossed.”

The concept of a killer robot is vague and prone to hype, invoking T-800 of “The Terminator,” an adaptive mobile killing machine deployed by an artificial superintelligence, Skynet, that perceives humanity as a threat. Nothing close exists in Ukraine. “Everybody thinks, Oh, you are making Skynet,” said a captain responsible for integrating new technology into the 13th Khartia Brigade of Ukraine’s National Guard, one of the country’s most sophisticated units, in which Lipa and Bober serve. “No, the technology is interesting. But it’s a first step and there are many more steps.”

The captain and other technicians working with A.I.-enhanced weapons said they tend to be brittle, limited in function and less accurate than weapons under skilled human control. Many have a short battery life and brief flight times. Autonomous weapons with sustained endurance, high flexibility and the ability to discern, identify, rank and pursue multiple categories of targets independent of human action have yet to appear, and they would require, the captain said, “a waterfall of money” plus much imagination and time. “It’s like the staircase of the Empire State Building,” he said. “That’s how many steps there are, and we are inside the building but only on the first floor.”

As a safeguard against A.I.-powered weapons slipping the leash, humanitarians and many technologists have long advocated keeping “humans in the loop,” shorthand for preventing weapons from making homicidal decisions alone. By this thinking, a trained human must assess and approve all targets, as Lipa and Bober did, ideally with the power to abort an individual strike and a kill switch to shut an entire system down. Strong guardrails, the argument goes, are necessary for accountability, compliance with laws of armed conflict, legitimacy around military action and, ultimately, for human security.

Schmidt has emphasized the necessity of human oversight. But at the end of a flight, some semiautonomous weapons in Ukraine can already identify targets without human involvement, and many Ukrainian-made systems with human override are inexpensive and could be copied and modified by talented coders anywhere. Some of those designing A.I.-enhanced weapons, who consider their development necessary for Ukraine’s defense, confess to unease about the technology that they themselves are conjuring into being. “I think we created the monster,” said Nazar Bigun, a young physicist writing terminal-attack software. “And I’m not sure where it’s going to go.”

The Dawn of Autonomous Attack

Bigun’s own journey exemplifies how the duration and particulars of the war incentivized the creation of semiautonomous weapons. When Russia rolled mechanized divisions over Ukraine’s border in 2022, Bigun was leading a team of software engineers at a German tech start-up. In early 2023, he founded a volunteer initiative for the military that eventually manufactured 200 first-person-view (F.P.V.) quadcopters a month. It was a significant contribution to Ukraine’s war effort at a time when low-cost and explosive-laden hobbyist drones, not yet widely recognized as the transformative weapons they are, remained in short supply. His focus might have remained there. But as he and his peers heard from frontline drone pilots, they became concerned about declining success rates of kamikaze drones in the face of defensive adaptations, and they joined the search for solutions.

The problems were many. As more drones took flight, both sides developed physical and electronic countermeasures. Soldiers erected poles and strung mesh to snag drones from the air, and they covered the turrets and hulls of military vehicles with protective netting, grates or welded cages. Among the most frustrating countermeasures were jammers that flooded the operating frequencies used for flight control and video links, generating electronic noise that reduced signal clarity in pilot-to-drone connections. The systems became standard around high-value targets, including command bunkers and artillery positions. They also appeared on expensive mobile equipment, like air-defense systems, multiple-rocket launchers and tanks.

A Ukrainian military vehicle traveling under a web of netting meant to protect against drone attacks near the frontline city of Pokrovsk. Credit: Finbarr O'Reilly for The New York Times

This complex puzzle led to the creation of drones that fly on fiber-optic cables, one solution that has appeared on the battlefield. It also fueled Bigun’s interest in a form of computer-enabled attack, known as last-mile autonomous targeting, in which computer vision and autonomous flight control would guide drones through the final stage of attack without radio-signal inputs from a pilot. Such systems promised another benefit as well: They would increase the efficacy of strikes at longer range and over the radio horizon, when terrain or the Earth’s curvature interfere with a steady radio signal.

In theory, the technical remedy was simple. When pilots anticipated a break in communications, they could pass flight control to an automated substitute — a powerful chipset and extensively trained software — that would complete the mission. With this tech coupled to onboard sensors and a camera, the pilot could lock the mini-aircraft on a target and release the drone to strike alone. The company Bigun co-founded in 2024, NORDA Dynamics, did not manufacture drones, so it set to work creating an aftermarket component to attach to other manufacturers’ weapons. With it, a pilot would still fly the drone from launch until it neared a target. Then the pilot would have the option of autonomous attack.

Boosted by funding from the Ukrainian government and venture capital firms, NORDA spent much of 2024 testing a prototype that evolved into its flagship offering, Underdog, a small module that fastens to a combat drone. When flying an Underdog-equipped quadcopter, a pilot with F.P.V. goggles still controls the weapon from takeoff almost to destination. But in a flight’s final phase, the pilot has the choice — via an onscreen window that zooms in on objects of interest, like a building or car — of approving an autonomous attack in a process called pixel lock. At that moment, Underdog takes over.
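The pixel-lock handoff described above — a pilot designates an onscreen object, after which onboard software keeps it centered through the dive — can be sketched as a toy template tracker. This is a minimal illustration of the general idea, not Underdog's method: the function names and the brute-force matching are assumptions, and a production system would use a far more robust tracker.

```python
import numpy as np

def lock_template(frame, cx, cy, size=8):
    """Crop the patch around the pilot's lock point; this becomes the template."""
    return frame[cy - size:cy + size, cx - size:cx + size].copy()

def track(frame, template):
    """Find the template's best match in a new frame (brute-force sum of
    squared differences) and return the match's center coordinates."""
    th, tw = template.shape
    fh, fw = frame.shape
    best, best_xy = np.inf, (0, 0)
    for y in range(fh - th):
        for x in range(fw - tw):
            err = np.sum((frame[y:y + th, x:x + tw] - template) ** 2)
            if err < best:
                best, best_xy = err, (x + tw // 2, y + th // 2)
    return best_xy

def steering_command(frame_center, target_xy):
    """Offset of the locked target from the optical center; a flight
    controller would steer to null this error on every frame."""
    return (target_xy[0] - frame_center[0], target_xy[1] - frame_center[1])
```

Run per frame, the loop re-finds the locked patch and emits a steering error, which is why the drone in the field test kept closing on the S.U.V. no matter how the driver maneuvered.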

Underdog began with tests on stationary objects, but after repeated updates, its software chased moving quarry. Range extended, too. Early modules allowed 400 meters of terminal attack; by summer 2025, with the fifth version of NORDA’s software, pixel lock reached 2,000 meters — about 1.25 miles. By then the modules had been distributed to collaborating F.P.V. teams at the front. “We have some very good feedback,” Bigun said. A company bulletin board listed early hits, among them Russian artillery pieces, trucks, mobile radar units and a tank.

Nazar Bigun (center), co-founder and chief executive of the Ukrainian defense-tech start-up NORDA Dynamics, with colleagues on a lunch break from a nearby arms expo. Credit: Finbarr O'Reilly for The New York Times

One summer afternoon in western Ukraine, Bigun and several employees arrived at a tree line of wild pear, apple and plum dividing agricultural fields. Cows meandered past, swishing tails to shoo flies. Two white storks glided to the ground, alighted and picked their way through the furrows, hunting. NORDA’s technicians sent a black S.U.V. with a driver and two-way radio to drive along the fields.

A test pilot, Janusz, a Polish citizen who had volunteered as a combat medic in Ukraine before joining the company, sat in the van wearing goggles and holding a hand-held radio controller. Once the S.U.V. drove away, he commanded an unarmed F.P.V. drone through liftoff. “I’m flying,” he said over the radio.

The video feed showed golden fields and green windbreaks, overlaid with dirt roads. The drone climbed to about 200 feet. Its camera revealed the black S.U.V. less than a mile away. Onscreen, Janusz slipped a square-shaped white cursor over the image of the vehicle. A pop-up window appeared in the upper-left corner containing a stabilized close-up of the S.U.V. With his left hand, Janusz selected pixel lock. The word “ENGAGE” appeared within a red banner onscreen. Thin black cross-hairs settled on the center of the S.U.V.

Janusz lifted his hands from the controller. From an altitude of about 215 feet, the drone entered a slow dive. Within seconds it had flown almost to the moving S.U.V.’s windshield. Janusz switched back to human piloting and banked the quadcopter away, sparing the vehicle damage from impact.

At his command, the drone climbed, spun around and resumed pursuit, this time from more than 500 feet up. Its prey bounded along a road. Janusz lined up the cursor and engaged pixel lock again. The drone entered a second independent dive, accelerating toward the fleeing car. Once more Janusz overrode the software at the last moment. The quadcopter buzzed so closely that the whine of its engines was picked up by the vehicle’s two-way radio and broadcast inside the pilot’s van. Janusz smiled.

He swung the drone around, showing the van he sat in. The cursor briefly presented the possibility of pixel lock on himself. Janusz chuckled and steered the weapon away, back toward the S.U.V. The driver’s voice crackled over the radio. “We will right now make a turn,” he said.

For the next half-hour, the driver’s maneuvers made no difference. No matter what he did, the drone, once pixel-locked, closed the distance autonomously, harassing the moving vehicle with the tenacity of an obsessed bird.

Compared with conditions common in war, the field exercise was simple. Groundspeeds were slow, flights were by daylight, no power lines or tree branches blocked the way. The drone maintained a constant line of sight with the S.U.V., and the software had to lock on a lone vehicle, not on a target weaving through traffic or passing parked cars. But with more training and computational power, the software could be improved to discern and prioritize military targets based on replacement cost or degree of menace, or fine-tuned to strike armored vehicles in vulnerable places, like exhaust grates or where turrets meet hulls. It might be trained to hunt most anything at all — a bus, a parked aircraft, a lectern where a speaker is addressing an audience, a step-down electrical transformer distributing power to a grid.

An employee from NORDA Dynamics launching a Dart-2 fixed-wing strike drone for engineers to fine-tune their automated flight software. Credit: Finbarr O'Reilly for The New York Times

For Bigun, the natural worry that such technology could be turned against civilians has been overridden by the imperatives of survival. Beyond coding, his work involves interacting with arms designers from Ukraine and the West, including at weapons expositions, where he seeks partners and clients. But he often visits the Field of Mars, a cemetery in Lviv that is a repository of solemn memory and raw pain.

Bigun’s great-uncle was a Ukrainian nationalist during the totalitarian rule of Stalin. For this, Bigun’s grandfather was deemed an enemy of the state by association, and shipped to Siberia at age 16. Both men are buried on the grounds, where they have been joined by a procession of soldiers killed since the full-scale invasion. On an evening following one of Bigun’s arms-show appearances, mourners at the field sanded tall wooden crucifixes by hand, then reapplied lacquer; a widow sat beside a grave talking to her lost husband as if he were sipping tea in an opposite chair; a family formed a semicircle around a plot covered in flowers with each member taking turns catching up a deceased soldier on household news. The field reached full capacity in December — almost 1,300 graves — prompting Lviv to open a second cemetery for its continuing flow of war dead. Just before Christmas, the second field held 14 fresh mounds.

Bigun abhors the need for these places. But beside commemorations of friends snatched early from life by the war, he said, he finds inspiration to continue his work. “This is where I feel the price we pay,” he said, “and it motivates me to move forward.” By the end of the year, NORDA Dynamics had provided frontline units fighting in the East with more than 50,000 Underdog modules.

The Rise of the Swarm

The Ukrainian military’s hard pivot to drone warfare helped save the nation. For almost four years, while fielding the world’s first armed force to reorganize around unmanned weapons, it blunted the ground assaults of Russia’s far larger military. It continues to do so even as the Kremlin replenishes its thinned divisions, drawing from oil-state revenue and a population at least four times the size of Ukraine’s.

But the weapon has a limit. Almost all short-range kamikaze drones — a primary means of stopping advancing Russian soldiers — are flown by individual pilots, one at a time. Each is a vicious aerial acrobat: From airspeeds up to 70 miles an hour, small multicopters can stop, hover, turn and fly off in new directions for minutes on end, traits empowering pilots to find, chase and kill their human victims with chilling efficacy. And yet during sustained Russian attacks, typical frontline conditions can force drone teams to fight slowly. The pace is set by the duration between each drone’s launch and final approach, which at common standoff distances often stretches past 20 minutes. When Russian soldiers infiltrate in large numbers, single-drone strike sequences can feel slow and insufficient. Between sorties, enemies escape.

Given the enduring challenge of massing drone firepower, designers of autonomous combat-drone technology have sought to assemble drones into swarms, the allure of which is obvious to a nation under attack. Even small swarms would allow pilots to concentrate multiple weapons in punishing rapid-fire strikes, stiffening defenses and raising the prospect of overwhelming machine-only attacks.

Not long after Janusz’s terminal-attack flights, technicians from another Ukrainian company, Sine Engineering, gathered near a rural village to train drone pilots on its entrant to the swarm-tech field, called Pasika, the Ukrainian word for apiary. The heart of Pasika’s hardware is its radio modems — small frequency-hopping transceivers that act as beacons for flying drones. In flight, each quadcopter’s altitude and location update several times a second by measuring the differences in arrival times of radio signals from several known positions. Pasika software also provides automated flight control. At its current stage of development, a sole pilot can manage dozens of drones through autonomous launch, navigation and hovering — a pre-attack phase during which massed drones loiter pending instructions.

During a multiday training session for quadcopter teams fresh from frontline duty, Pavlo, a former infantryman who serves as Sine Engineering’s liaison to combat brigades, coached pilots as they practiced. The tech was futuristic but the scene was characteristically rural Ukrainian. The test range, hectares of hayfield and sunflower, was not secured behind fences or watched over by control towers. The students, including pilots from Kraken 1654 Unmanned Systems Regiment of the Third Army Corps and Samosud Team, a drone unit of the 11th National Guard Brigade, worked in casual clothes and colorful T-shirts, eating artisanal pizzas as they tinkered. A small horse stood tethered to a stake beside a wooden cart, chewing grass into a flat circle.

Via an app, the pilots took turns ordering drones to launch and navigate to a point on the map. Unaided by human hand, quadcopters rose in the air and sped off over the countryside to loiter together about a mile away. Tablet screens revealed their progress. A video feed from each quadcopter showed rolling cropland below. The pilots kept their hands off the controls. A rusty tractor puttered past.

Soldiers attending training with Sine Engineering, a Ukrainian defense-tech company. Sine’s product, Pasika, lets a single operator manage several drones at a time, hitting targets in quick-succession “swarm attacks.” Credit: Finbarr O'Reilly for The New York Times

For a final exercise, a pilot with the call sign Kara directed her team to gather two drones autonomously over an opposite field. Another team flew three more. Once the drones reached their loitering point, Kara said, pilots would take control and fly them manually toward targets to practice a massed attack.

Pasika also allows pilots on the app to exchange control of individual drones among themselves. In this way, any pilot could use them to attack across a short distance with brief intervals between strikes. The concept could be extended further. With Pasika or a similar system, quadcopters stockpiled in boxes near a front could be commanded by a sole operator, whether A.I. or human, creating a dense swarm of drones to face ground attacks without delay.

Multiplying a sole soldier’s combat power in these ways made sense to Kara, whose brother, husband and husband’s twin brother all rose to resist the Russian invasion. Ukraine grants humanitarian discharges to siblings of service members killed in action, and when her brother-in-law was killed, her husband transferred to the reserve. Upon his return home, Kara enlisted and became a drone pilot. Her call sign means retribution.

Pavlo, too, had his motivations. In summer 2022, he was almost killed by conventional military incompetence when his commander gathered more than 300 soldiers in buildings in Apostolove, a city north of Crimea. Dense public garrisoning offers rich targets for long-range weapons, which arrived as a barrage of S-300 missiles. Pavlo was inside a middle school when the first missile exploded in the yard. He huddled with others under a stairwell. The next missile hit the school squarely, leaving him pinned under rubble as yet another roared in and exploded. At least four peers in the stairwell died. Pavlo suffered burns on his head, torso and arms. After convalescing, he trained as a drone pilot and began flying reconnaissance missions behind Russian lines. Experience told him Ukraine needed high-tech solutions to secure its future. “It woke something in me,” he said, of nearly dying because of an old-school infantry commander’s lazy mistake. “An instinct for survival on a more sophisticated level.”

Can A.I. Replace GPS?

In late 2023, Brian Streem, the founder of a niche drone-cinematography business, was meeting with a Latvian company about putting a visual navigation system on long-range drones when one of his hosts suggested he bring his ideas to Ukraine. Deep-strike drones, essentially slow-flying cruise missiles that can travel hundreds of miles, require precise navigation to move through foreign airspace for hours, and to follow zigzag routes that change altitude frequently to evade air defenses. Ukraine had high hopes for its growing deep-strike arsenal to target Russian fuel and arms depots. But Russia had shut down GPS over the front and its western territory. Vaunted GPS-reliant systems were failing. The Latvian company thought Streem had a solution.

Streem was new to the arms business, and his journey had been anything but direct. A native of Bayside, Queens, he graduated from New York University’s Tisch School of the Arts in 2010 and started producing independent films and commercials. His work indulged a fascination for difficult photographic challenges, which led him to drones and the creation of a company, Aerobo, that shot aerial footage for music videos and Hollywood productions, including “A Quiet Place,” “Spider-Man: Homecoming” and “True Detective.”

Brian Streem, the founder of Vermeer, a company that develops visual positioning systems (V.P.S.) that enable drones to navigate without GPS assistance. Credit: Finbarr O'Reilly for The New York Times

Aerobo’s services were in demand. Streem might easily have settled in, but he found working in Hollywood to be stressful and sometimes stultifying. Every shoot had to be both technically perfect and aesthetically pleasing, even beautiful. “If you think military drone missions are hard,” he said, “try making Steven Spielberg happy.” Moreover, the available tech, though expensive, felt janky and resistant to graceful use. Quadcopters were just beginning to enter markets, and the larger drones Aerobo flew required at least two people — a pilot controlling the airframe and an operator moving a camera system on a gimbal, “in this kind of dance,” Streem said. Frustrated with the equipment and unsure how to scale up his business, he shuttered Aerobo in 2018, renamed the company Vermeer and started developing software to make drone cinematography more intuitive.

Revenue dried up. In 2019, feeling desperate, he attended an investor speed-dating seminar in Buffalo and sat across from Warren Katz, an entrepreneur who directed a U.S. Air Force start-up accelerator that funded military tech. Streem explained what he was working on. Katz suggested that the Air Force “would be very interested in what you’re doing.” Streem was astonished. He was not a weapons designer; one of his last professional collaborations was a music video by the rapper Cardi B. “If you’re coming to me for help,” he said, “we’re in a lot of trouble.”

“Well,” Katz said, “you might be surprised.”

Katz urged him to apply to a program for start-ups of Air Force interest. Streem hastily filled out forms. A month later, Vermeer was selected. Streem moved to Boston, began meeting officials from the Pentagon and was at once startled and pulled in. “I realized, OK, I don’t know much about A.I.,” he said. “But as I am talking to these people, I’m kind of thinking to myself, I don’t think they know much about A.I., either.”

Streem is amiable, persistent and energized by a seemingly instinctual inclination for sales. When Covid closed offices, he retreated to a lakeside cabin and created an internet-scraping tool that yielded the email addresses of more than 50,000 military officers. From social isolation, Streem started writing them to discuss what they saw as the military’s most pressing technical challenges. Answers clustered around reliance on GPS. Entire arsenals of American military equipment, he heard, depended on a satellite navigation system that a sophisticated enemy could disable. The more he learned, the more his ideas clicked into place: I may know how to solve these people’s problems, he thought.

His idea was straightforward. He would program autopilot software to process visual information from multiple cameras, compare it with onboard three-dimensional terrain maps, then triangulate to fix a weapon’s location. Called a visual positioning system, or V.P.S., the software could be loaded onto any number of flying platforms, equipping them to navigate over terrain with no satellite link at all. It could not be jammed or spoofed, because it would neither emit nor receive a signal.
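The core of such a system can be sketched as image-to-map matching: slide the camera view across a stored reference map and take the best correlation peak as the position fix. The toy version below assumes a flat, straight-down camera geometry and a single image; Vermeer's real system fuses multiple cameras against 3-D terrain, and every name and parameter here is illustrative.

```python
import numpy as np

def vps_fix(camera_img, ref_map, meters_per_px, origin_xy=(0.0, 0.0)):
    """Sketch of a visual position fix: scan the downward camera image
    across the stored reference map, take the best normalized-correlation
    match and convert that pixel offset to ground coordinates.
    Nothing is emitted or received, so there is no signal to jam or spoof."""
    ch, cw = camera_img.shape
    mh, mw = ref_map.shape
    cam = camera_img - camera_img.mean()
    best, best_yx = -np.inf, (0, 0)
    for y in range(mh - ch + 1):
        for x in range(mw - cw + 1):
            win = ref_map[y:y + ch, x:x + cw]
            w = win - win.mean()
            denom = np.linalg.norm(cam) * np.linalg.norm(w)
            score = (cam * w).sum() / denom if denom else -np.inf
            if score > best:
                best, best_yx = score, (y, x)
    y, x = best_yx
    east = origin_xy[0] + (x + cw / 2) * meters_per_px
    north = origin_xy[1] + (y + ch / 2) * meters_per_px
    return east, north, best
```

The returned correlation score doubles as a confidence measure: a weak peak over featureless terrain (water, snowfields) warns the autopilot that the fix may be unreliable.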

V.P.S. units designed by Vermeer sitting in storage. The system uses cameras to analyze an environment and match it to 3-D maps, determining precise locations even when satellite signals are jammed or unavailable. Credit: Finbarr O'Reilly for The New York Times

In early 2024, Streem rode a train from Warsaw to Kyiv and messaged Ukrainian officials on LinkedIn, introducing himself and his work. Soon he was invited to the Cabinet of Ministers building, where he made his way past the sandbagged entrance, attended an impromptu birthday party for a government employee and was led into a room with an official who wanted to put Vermeer’s modules on deep-strike drones. The building’s power was out. The office was dark. Ukraine was short on money, soldiers and time. The official did not mince words. “He essentially had a big map of Russia and Ukraine behind him, and immediately he started telling me about targets we’re going to hit,” Streem said.

Streem had talked his way into Ukraine’s war. Over the next year, he met drone manufacturers around the country, tested prototypes on various drones and released the VPS-212, a roughly one-pound box with two cameras and a minicomputer. Assembled in an office beside a bagel shop in Brooklyn, the module can fix its location at speeds up to 218 m.p.h. — not fast enough for a proper cruise missile, but sufficient for most deep-strike drones. By summer 2025, Vermeer’s technicians were helping soldiers attach them to drones flown by several units that attack strategic targets in Russia. For security reasons, Vermeer does not publicly discuss specific strikes, but Streem and a Vermeer employee in Ukraine said V.P.S. modules had guided drones to verified hits.

With these results, Vermeer emerged as a winner in the scrum for contracts and funding, raising $12 million in a recent round that included the venture capital firm Draper Associates. In 2025, it attracted renewed attention from the Pentagon, which proposed attaching Vermeer’s V.P.S. to new fleets of deep-strike drones of its own. The U.S. Air Force also contracted the company to develop a similar system that would mount atop drones to look skyward and navigate celestially, like an A.I.-enabled sextant for precision flight above clouds or over water.

By then, Streem was immersed in his new line of work. Late in 2025, he sent me a tongue-in-cheek text about his passage from filmmaker to manufacturer of self-navigating camera modules that steer high-explosive payloads into Russia. “Stream reclined in his chair, vapor from his vape pen curling lazily between his fingers, as if the war beyond the walls were a distant rumor rather than the air he breathed,” the text read. The prose, he said, had been generated by A.I. It misspelled his name.

The Great Convergence

Few people could be as conversant in the promises and the perils of A.I.-enhanced weapons as Eric Schmidt, who served as chairman of the U.S. Defense Innovation Board from 2016 to 2020 and led the bipartisan National Security Commission on Artificial Intelligence from 2018 to 2021. Since working with Ukraine, he has been mostly tight-lipped about his wartime ventures, which have operated under multiple names, including White Stork, Project Eagle and Swift Beat. Through a public-relations firm, he declined to respond to multiple requests for comment for this article. But in a talk at Stanford in 2024 he called himself “a licensed arms dealer.” And by that fall, his operation had hired former Ukrainian soldiers to meet with active drone teams near Kyiv and train them on semiautonomous weapons to take to the front.

Ukrainian units vary widely in quality, and Schmidt’s team appears to have chosen carefully. Those collaborating with his operation are among Ukraine’s best managed, with reputations for innovation, battlefield savvy and sustained success against much larger Russian forces. Among them is the Khartia Brigade, which formed in 2022 to defend Kharkiv, Ukraine’s second-largest city, and grew into a disciplined, tech-centric battlefield force. With an extensive fleet of aerial and ground drones, modern command posts and an ethos of data-driven decision-making, Khartia operates from hiding in the city and countryside. Its officers track Russian actions with networked ground cameras and sensors on the battlefield and reconnaissance drones in the air. The raw information is run through software producing outputs that resemble, its analysts say, “Moneyball” gone to war.

“We collect a lot of statistics and data,” said Col. Daniel Kitone, the brigade commander. “In conditions where resources are limited, we are providing for efficiency, and sometimes statistics give interesting answers that drive operations.” Rapid processing of multiple forms of surveillance data, the brigade’s officers say, can illuminate patterns, like recurring times when Russian troops resupply positions. They then use these insights to synchronize strike-drone flights with anticipated Russian movement, hoping to catch targets in the open.

Khartia Brigade soldiers on shift in an underground command center in northeastern Ukraine. Credit: Finbarr O'Reilly for The New York Times

Bumblebee’s combat trials began with multiple brigades last winter. One test pilot, who flew the drone near Kupiansk in December 2024, said his team put the new quadcopters through progressively harder tests against Russian positions and vehicles, with the manufacturer’s engineers on call for technical assistance. “During this whole time, the product constantly improved,” the pilot said. “We constantly provided the developers with feedback: what works, what really does not work, which features are useful and which are not.” As with most A.I. products, more data can lead to smarter software. During Bumblebee’s quiet rollout, mission data from combat flights was logged and analyzed, several pilots said. Developers then pushed software updates to brigades that drone teams could download remotely.

One early attack hit the entrance to a root cellar where Russian soldiers sheltered. As Bumblebees evolved, pilots flew them over longer distances and against moving targets. In January 2025, an autonomous Bumblebee attack stopped a Russian logistics truck as it drove behind enemy lines, the test pilot said. In April, after more updates, Bumblebees flew autonomously against a Russian armored vehicle driving with a jammer. The vehicle was so covered with anti-drone protective measures that it resembled a porcupine. It absorbed the first strike and kept moving. A second Bumblebee hit immobilized it. This amounted to a milestone: A vehicle with all the protective steps Russia could muster had been removed from action.

Bumblebee’s entrance to the war had been shrouded in secrecy. But with strikes like that, the hush could not last. Russia took note. Its soldiers outside Kupiansk and Kharkiv, where early Bumblebee sorties occurred, reported that these strange new drones seemed impervious to interference: They flew smoothly through jamming and kept racking up hits. The only sure way to stop them was to shoot them down.

Two days after the strike on the armored vehicle, the Center for Integrated Unmanned Solutions, a Russian drone-manufacturing business outside Moscow, issued findings from its analysis of a downed Bumblebee recovered near the front. It nicknamed the quadcopter Marsianin, Russian for “Martian,” based on the assumption that the prototypes descended from NASA’s Ingenuity program, which developed a small autonomous helicopter that flew on Mars. The report’s author declared that Ukraine had fielded an A.I.-enhanced drone capable of “operating in total radio silence” while flying complex routes and maneuvers “completely independent of navigation systems” — or even a human pilot.

The following month, the Novorossiya Assistance Coordination Center, a Russian ultranationalist organization that provides training and equipment to Russian soldiers, published a second analysis, a 49-page report loaded with warnings. Through open-source sleuthing, its author had found a photo of a Bumblebee posted by a Reddit user who in late 2024 obtained a broken specimen from garbage discarded at a Michigan National Guard facility. With that evidence, the author, Aleksandr Lyubimov, who organizes combat-drone exhibitions in Russia, echoed the first analyst’s suspicion that the Bumblebee had some connection to the United States. He noted that it “poses a serious threat” and asserted, accurately, that “its current use does not yet reveal its full capabilities and is most likely of a combat testing nature.” Nonetheless, he added, “there are no effective countermeasures against it, and none are expected in the near future.”

The Bumblebee’s resistance to jamming, according to a sales pamphlet in limited circulation in Ukraine, was tied in part to “redundant comms,” radio-wave frequency hopping and navigation by visual inertial odometry — technical solutions to signal jamming. But what made the weapon remarkable was not any one autonomous feature. It was the convergence of several.

According to the pamphlet, which claimed an “over 70-percent direct-hit rate via autonomous terminal guidance,” Schmidt’s quadcopters are also capable of autonomous target recognition. By overlaying bright green squares on items of interest appearing in video feeds, pilots said, Bumblebee’s software highlights potential targets, including foot soldiers, bunkers, vehicles and other aerial drones, often before human pilots can spot them. The combination of features, they said, results in an autonomous attack capability more robust and reliable than others available to date.

Bumblebees can also be controlled over the internet, which Lipa did in the strike in Borysivka. This keeps pilots away from the front, and from many weapons that might counterattack. In theory, as long as a Bumblebee’s ground station maintains a stable Wi-Fi or broadband connection, a pilot can operate it from almost anywhere — a capability demonstrated this past summer when Schmidt visited Kyiv and observed a Khartia team flying a Bumblebee released by a ground crew outside Kharkiv. According to a review of footage and people familiar with the mission, the drone passed over the lines and hit a four-wheel-drive Russian military van, known as a bukhanka — from roughly 300 miles away.

A Billionaire’s Growing Fleet

The Bumblebee is not a one-off project. It’s part of an experimental pack. Schmidt’s operation has also supplied Ukrainian units with a medium-range fixed-wing strike drone with a two-meter wingspan, marketed under the name Hornet, according to another sales pamphlet from early 2025. Like Bumblebee, it has A.I.-powered target recognition and terminal-attack guidance, along with jam-resistant communication and navigation systems. The pamphlet advertised an 11-pound payload, a cruise speed of 62 m.p.h. and range exceeding 90 miles. “Our A.I.-powered platform processes battlefield data in real time, adapting to changing conditions without human intervention,” the pamphlet says. “Neutralize more targets at a fraction of legacy system costs. Deploy at scale to achieve overwhelming force multiplication against sophisticated threats.” The pamphlet claimed a “future monthly production” of more than 6,000 units.

Schmidt has also become an ally in Ukraine’s defense against Shaheds, the Iranian-designed long-range drones that pummel Ukrainian cities almost nightly. In July 2025, he appeared with Ukraine’s president, Volodymyr Zelensky, to announce a strategic partnership to provide Ukraine with A.I.-powered drones, with an emphasis on an interceptor system known as Merops. Ukraine was developing its own human-piloted anti-Shahed drones, and had some early successes. But Schmidt’s weapons, Ukrainian officials said, were more effective. Mykhailo Fedorov, Ukraine’s first deputy prime minister, showed videos of them striking Shaheds at high speed. Merops had a hit rate as high as 95 percent, he said. (After its combat trials in Ukraine, Merops is now being deployed on NATO’s eastern flank.)

Ukrainian pilots and officers say Schmidt’s products have shown more promise than most, though reviews for Bumblebee have not been universally glowing. The weapon has no night cameras, limiting flights to daytime. Lyubimov, the Russian technical analyst, described the drone as inadequately weather resistant. “The design exhibits many ‘childhood flaws,’” he wrote. But Schmidt’s technicians are available on the Signal app and responsive to suggestions, said Serhii, a Bumblebee combat pilot and Khartia’s chief technical consultant. The first Bumblebees were difficult to operate, but through cycles of feedback and updates they became better. “In the beginning it didn’t fly without a professional pilot,” he said. “Now it can fly with a newbie.” Serhii said he had tested 15 semiautonomous terminal-attack drones from different manufacturers, and Bumblebee was the best. A new generation of Bumblebees is in the works, he added, with a stronger airframe and night optics.

Hardware upgrades would be welcome. But the captain who integrates new tech into Khartia’s operations said hardware was not the secret ingredient behind Bumblebee’s performance. It is the firmware and the flight software, BeeQGroundControl, that separate the drone from others. “Eric Schmidt made a very innovative drone,” he said, adding that Bumblebee is one of the only drones in Ukraine that “is ready out of the box.” Teams simply add an explosive charge and begin work.

In one coordinated kamikaze attack, Serhii said, three Bumblebees and a standard F.P.V. drone destroyed a 152-millimeter howitzer inside Russia that was protected under a bunker roofed with logs. The first Bumblebee dove into camouflage netting and set it afire; the second breached the roof. Then the standard F.P.V. plunged inside and the final Bumblebee hovered overhead and scattered 10 small anti-personnel land mines around the site. The strikes were timed two minutes or less apart. Khartia has repeated the tactic. “There have been many similar attacks,” Serhii said.

Bumblebees are so valuable, he added, that teams flying them are assigned only to important missions — principally hunts behind Russian lines for artillery and logistics vehicles, and to carry relay transmitters that extend other drones’ ranges. Other units deliver packets of blood to bunkers that medics use to stabilize wounded soldiers awaiting evacuation, an officer supervising strike-drone teams said.

For all the ways that Bumblebees have brought together multiple autonomous features, Schmidt’s engineers, Ukrainians said, have not programmed weapons for full autonomy. Like NORDA Dynamics’s Underdog, Bumblebees require a human to designate targets before attack. “My opinion is that we need to leave the final judgment to the human being,” one Bumblebee test pilot said. Schmidt has agreed. “There’s always this worry about the ‘Dr. Strangelove’ situation, where you have an automatic weapon which makes the decision on its own,” he said in a televised 2024 appearance. “That would be terrible.”

A soldier known by the call sign Lipa (left) from Ukraine’s Khartia Brigade, updating the software on Bumblebee attack drones. Credit: Finbarr O'Reilly for The New York Times

In “Dr. Strangelove,” the 1964 Stanley Kubrick movie, a Soviet doomsday device detects a rogue American attack and automatically launches a nuclear response, effectively exterminating humankind. Schmidt’s caution aligns with Pentagon policy, which embraces keeping “humans in the loop,” at least in an aspirational spirit. Its 2023 guidance for autonomy in weapons systems, known as Directive 3000.09, requires “appropriate levels of human judgment over the use of force,” and senior-level official review of all autonomous systems in development or to be deployed. But the directive offers no clarity about what, exactly, constitutes “appropriate levels of human judgment.”

Further, no global consensus or convention exists for these ideas or other forms of design constraint. The arms race is afoot without mutually accepted guardrails. Schmidt has prefaced his support for keeping people in the loop by pointing out that “Russia and China do not have this doctrine,” suggesting that weapons that kill outside of human supervision could find their way to the battlefield no matter anyone’s positions or desires.

At the Edge of Total Autonomy

The compete-or-die paradigm has brought semiautonomous weapons into new territory fast. X-Drone, for example, merges multiple forms of autonomous tech onto long-range drones. Its software helps navigate the weapons to a distant area, like a seaport, then uses computer vision to identify and attack specific targets — warships, fuel-storage tanks, parked aircraft. “You fly 500 kilometers and you miss a target by a little bit, and your mission is wasted,” the company’s founder said. “Now we train on an oil tank and it hits.”

Drones with X-Drone’s software have also hit trains carrying fuel and expensive Russian air-defense radar systems, clearing routes for more drones to follow, according to the founder. Andrii, a pilot of medium-range strike drones, said he flew more than 100 A.I.-enhanced missions in 2025 with the company’s software. His work involved flying to areas where reconnaissance flights detected a valuable target, then passing control to the software for terminal attack. On a sortie this fall, he said, the drone struck a mobile air-defense system.

By late 2025, the founder said, X-Drone had provided Ukrainian units with more than 30,000 A.I.-enhanced weapons. The company is experimenting with more complex capabilities, including loading facial recognition technology into drones that could identify then kill specific people, and coupling flight-control and navigation software with large language models, or L.L.M.s, “so the drone becomes an agent,” he said. “You can literally speak to the drone, like: ‘Fly to right, 100 meters. What do we see? Do you see a window? Fly inside the window.’”

Using armed quadcopters to peer into windows or enter structures is not new. The practice is common in shattered neighborhoods at the front. Videos posted on Telegram by Ukrainian units show piloted quadcopters performing exactly such feats, slipping into occupied buildings to kill Russian soldiers within. Adding a role for A.I. on such flights could allow this particular form of violence to expand.

With A.I.-enhanced weapons, the ethical distinction between two broadly different types of strike — a drone selecting a large inanimate object for attack and a drone autonomously hunting human beings — is large. But the technical difference is smaller, and X-Drone has already crept from the inanimate to the human. The company has developed A.I.-enhanced quadcopters that, its founder says, can attack Russian soldiers with or without a human in the loop. He said the software allows remote human pilots to abort auto-selected attacks. But when communications fail, human control can become impossible. In those cases, he said, the drones could hunt alone. Whether this is occurring yet is not clear.

As semiautonomous weapons train to pursue people, Asaro, of the Stop Killer Robots Campaign, warned that entering this uncharted moral frontier was deeply worrisome, because computer programs applying rules to patterns of sensor data should not determine people’s fates. “These things are going to decide who lives and who dies without any sort of access to morality,” he said, and were the essence of digital dehumanization. “They are amoral. Machines cannot fathom the distinction between inanimate objects and people.” Ukrainians involved often agree. But whether fighting in brigades or coding in company offices, as members of a population unwillingly greased in blood, they speak of ample motivation to continue their work, and of little time for regret.

X-Drone’s founder had not intended to become an arms manufacturer; it is a role he neither sought nor foresaw. Born in Soviet Russia, he worked a long career in the United States and was arranging tech deals in Kyiv when the Kremlin’s forces invaded in 2022. A physicist by training, he joined a neighborhood defense unit, which gave him a rifle. When the Russian vanguard breached the capital near his home, he turned out to meet it, then recorded videos of the bloodied bodies of soldiers his group killed. Over a meal in Kyiv in 2025, he showed one of the videos almost incredulously. “All of my career I worked in Silicon Valley and on Wall Street,” he said, “and one day I am shooting Russians near my house?”

Now the front was only a few hours’ drive away, and Russia’s missiles and long-range drones could kill anyone in Ukraine in their sleep on any night. He nodded toward a peaceful daytime street scene. “It feels normal, but it’s just not,” he said. “This is an illusion of normality.”

Wars can push everyday people to extreme positions, which for a physicist can take the form of pragmatic epiphany: With Ukraine’s defenses slowly yielding ground, and defeat meaning a return to life under Moscow’s boot, developing A.I.-enhanced drones was logical, obvious and necessary. Western militaries were far behind, he said, and Ukraine’s resistance was buying them time. Its innovations might save them, too. “Drones with A.I. are the big game-changer,” he said. “The whole military infrastructure previously is obsolete.”

‘They Should Have Stopped the War Early On’

On the range where Pavlo trained pilots on Sine Engineering’s tech, one student, Yurii, who commands an F.P.V. platoon in a frontline brigade, brushed aside philosophical discussion. He had participated in some of the war’s most prominent battles, including the incursion in 2024 into Kursk. When the full invasion began, he was a medical doctor in Western Europe. Now he killed Russian soldiers, a career deviation he insisted contained no breach of his Hippocratic oath. While practicing medicine, he prescribed antibiotics to kill microbes to save patients and stop contagions’ spread. Strikes on Russian soldiers, he offered, amounted to a similar public service. “Now we are killing bugs, too,” he said. “They’re just larger bugs.”

A front-row participant in Ukraine’s adoption of drone warfare, Yurii had seen his share of new weapons. In his view, A.I.-powered drones were inevitable. “Any large-scale war, it delivers demons,” he said. “It unleashes something powerful and it accelerates developments which otherwise would have taken decades.” World War I saw rollouts of combat aviation, tanks and artillery, alongside widespread use of chlorine, phosgene and sulfur mustard. World War II ended after the United States destroyed Hiroshima and Nagasaki with nuclear bombs. “Who knows what this war is going to unleash,” Yurii said. “If the international community is concerned about this, then they should have stopped the war early on.”

Weapons as transformative as the combat drones proliferating in Ukraine are historically uncommon. They enforce tectonic shifts in military tactics, budgets, doctrines and cultures. Organizations that adapt to the new capabilities and dangers can thrive; those that do not are left to suffer battlefield humiliations and miseries for their rank and file. The rapid evolution of drones, now accelerating through integration with autonomy, is a moment potentially analogous to the rise of the machine gun during the Russo-Japanese War.

That new weapon’s power was demonstrated during the 1904 siege of Port Arthur. Tsarist forces were thick with peasants and drunkards. Japan’s imperial ranks were motivated and well trained. But when they attacked fortified Russian positions in massed infantry assaults, they rushed into machine guns, previously unseen in state-on-state conventional war, that shattered their lines with sweeping bursts of fire. Western military attachés were present to observe events that darkened dirt red. And yet most nations failed to take notice. European armies, oblivious to what machine guns would mean for their soldiers’ fates, continued to feed cavalry horses and to preach the glories of the open-ground charge. A decade later, hapless generals poured away young lives on the Western Front, out of step with the technology of their time.

The state of Western military readiness today galvanizes Deborah Fairlamb, a founding partner of Green Flag Ventures, a Delaware-registered venture capital fund investing in Ukrainian defense-tech start-ups. Even before autonomous drones appeared, she said, the extraordinary proliferation of unmanned weapons outran nations’ defensive abilities. “Most people in the West do not understand what is happening here,” she said, and the gap could mean stinging defeat and enormous loss of life.

The military section at Cemetery No. 18 in Kharkiv, one of many burial grounds for Ukrainian soldiers killed in the war with Russia. Credit: Finbarr O'Reilly for The New York Times

Fairlamb lives in Kyiv. Her alarm sounded after American veterans of Afghanistan and Iraq visited Ukraine to prospect for business or fight as volunteers. They entered a war in which new tech has caused almost unfathomable carnage. Russia and Ukraine have suffered well over a million combined casualties in less than four years, with most wounds caused by drones. “They come back from the front, like, shaken,” she said, and they share a refrain: “My team would not last for 48 hours out there.” With A.I.-enhanced drones joining the action, Fairlamb described the need to boost A.I.-arms development as no less than existential, prompting her to approach embassies and arms manufacturers with urgency. “It really and truly is about making people understand how dramatically different this technology is,” she said. “And how unbelievably unprepared the United States is.”

For Schmidt, multiple motivations appear to overlap. At Stanford in 2024, he said he entered drone manufacturing after seeing Russian tanks destroy apartments with elderly women and children inside. His entrance to the war earned him genuine gratitude from Ukrainians, whether they fly Bumblebees or are at decreased risk every time a Merops interceptor hits a Shahed. With a year-plus of combat shock-testing of its products, his operation is also well positioned for potential profits as nations reassess and update their arsenals in light of lessons learned from Ukraine.

He has framed his movement into the A.I. arms sector as implicitly humanitarian. “Now you sit there and you go, Why would a good liberal like me do that?” he said at Stanford. “The answer is that the whole theory of armies is tanks, artilleries and mortars, and we can eliminate all of them and we can make the penalty for invading a country, at least by land, essentially be impossible.” A.I.-powered weapons, he suggested, could end this kind of warfare.

This is a prediction with precedent from when machines guns were poised to upend ground combat as people knew it. In 1877, Richard Gatling, inventor of the Gatling gun, a prominent forerunner of automatic fire, proposed that as an efficient multiplier of lethal violence his weapon might spare people the horrors of war. “It occurred to me,” he wrote, that “if I could invent a machine — a gun — which could by rapidity of fire, enable one man to do as much battle duty as a hundred, that it would, to a great extent, supersede the necessity of large armies.”

Maybe the future will prove Eric Schmidt’s vision right. Whatever is coming will reveal itself in time. History shows Gatling was spectacularly wrong.


Yurii Shyvala contributed reporting.


No, the New York GOP Shouldn’t Take the Side of Unions

  • Republican decline: New York GOP influence in Albany fell sharply after the 2018 collapse of the Independent Democratic Conference, leaving them unable to prevent Democratic supermajorities.
  • Union alignment issue: Some state Republicans are cozying up to public-sector unions, making them harder to distinguish from Democrats and undermining reform efforts.
  • Messaging vacuum: Unlike candidates with clear platforms, today’s state GOP lacks a coherent identity, leading to voter confusion about what the party stands for.
  • Fiscal contradiction: Albany Republicans repeatedly backed large budget increases, resulting in roughly $20 billion more annual spending compared to a 2018 baseline adjusted for inflation.
  • Medicaid spending pressure: GOP lawmakers joined unions like 1199 SEIU in demanding higher Medicaid payments, despite New York spending nearly $100 billion annually—about 80 percent above the national per capita average.
  • Train staffing stance: Bruce Blakeman and lawmakers supported union-backed legislation requiring two-person subway crews, even though one-person operations are standard elsewhere and already used selectively in New York.
  • Union resistance to reform: Long outdated MTA work rules hinder innovation and efficiency, but the TWU and similar unions strongly resist change to protect their interests.
  • Opportunity for contrast: Republicans could gain ground by confronting union-driven costs and focusing on taxpayer value, yet accommodating those interests keeps them stuck without a compelling message.

Courtesy Lev Radin/Pacific Press/LightRocket/Getty.

It is no secret that New York State Republicans are struggling. While there have been occasional glimmers of hope—most notably former Rep. Lee Zeldin’s relatively close loss to Gov. Kathy Hochul in 2022—these have been the exception rather than the rule.

The party’s fortunes turned decisively in 2018, when the collapse of the Independent Democratic Conference—a group of moderate Democrats in the state legislature—ended a governing arrangement that had allowed Republicans to partner with the center Left to keep the far Left sidelined. In the aftermath, Republicans have seen their influence in Albany sharply curtailed. They are now barely able to prevent Democrats from securing a supermajority in the state senate.

The state GOP could respond to this decline by making themselves a serious alternative to the Democrats, one capable of addressing New York’s most pressing problems. Instead, some state Republicans are aligning themselves with the same interest groups that have long driven those problems in the first place, most notably public-sector unions.

These Republicans may believe that cozying up to unions will make them more viable. In reality, it’ll just make them less distinguishable from Democrats and less able to actually make things better for everyday New Yorkers.

One of the New York Republican party’s core weaknesses is the absence of a well-defined platform. Regardless of the merits of his proposals, voters knew exactly what Zohran Mamdani stood for. He returned to affordability relentlessly, no matter the question, and in doing so established a coherent political identity. It didn’t matter that large portions of his program are fiscally or legally implausible—when voters went to the polls, they knew what was on offer.

By contrast, it is difficult to say what today’s New York Republican Party is actually for. Zeldin had a singular focus—crime—that showed the power of staying on message. Three years on, however, if you were to ask voters on the street what the state GOP is about, you’d be unlikely to hear a consistent answer, much less a positive one.

That confusion is not an accident. It reflects the party’s voting record and behavior in Albany, and specifically its abandonment of fiscal responsibility.

As my Manhattan Institute colleague Ken Girardin has repeatedly pointed out, Republican lawmakers have backed massive budget increases over the past several years. The state now spends roughly $20 billion more per year, in inflation-adjusted terms, than it would have if the 2018 budget had simply kept pace with inflation. Blame Medicaid and school aid for that growth, both relentlessly pushed upward by powerful unions like 1199 SEIU and the United Federation of Teachers.

A party that routinely votes for the largest drivers of spending growth cannot credibly claim to be the party of fiscal restraint. Nor can it say that it is delivering better value for taxpayers when it largely goes along with Democrats’ unprincipled spending.

New York’s GOP has increasingly been coopted by the same union forces that prevent Democrats from pursuing meaningful reforms. Girardin notes, for example, that last year Republican Senator Patrick Gallivan joined a chorus of Democrats, conducted by the 1199 SEIU healthcare workers union, to demand “Medicaid equity.” In reality, the ask is for higher payments for hospitals and other medical providers—despite the state’s spending about $100 billion a year on Medicaid, nearly 80 percent above the national average per capita, and the most of any state.

More recently, presumptive GOP gubernatorial nominee Bruce Blakeman voiced his disapproval of Hochul’s veto of legislation that would have required two-person subway crews. The union-backed bill passed unanimously in the Assembly, and all but two Republican senators voted for it.

For context, most of the country and the world operate trains with one person: a driver, sans conductor. In a Marron Institute analysis of 400 train lines around the world, only 6 percent operated with two-person crews. The MTA uses one-person operations for subway shuttles and on shorter lines during off hours.

If signed, this legislation would have effectively taken the issue off the table when the MTA negotiates its next collective bargaining agreement with the Transport Workers Union. Blakeman wrote that he stood “shoulder to shoulder” with TWU Local 100, arguing that conductors are essential for public safety.

Though the TWU and other advocates frequently claim that conductors are necessary for public safety, conductors aren’t required to assist in threatening situations. That makes sense—transit workers shouldn’t be expected to put themselves in harm’s way. But it seriously undermines the claim that conductors play a vital public-safety role.

A TWU vice president recently acknowledged as much on X, posting that conductors are only required to report incidents to the MTA and NYPD. “When we take action we get written up for dismissal or other negative actions by the MTA,” he wrote.

It’s not unreasonable to oppose one-person train operation, which—despite being the global and national standard—must be considered carefully given the immense complexity and intensive use of the MTA’s transit system. But it’s another thing to support banning the MTA from using it at all. One-person operations work well enough in the few instances where the MTA uses them today—why upset that apple cart?

It’s also understandable that Republicans like Blakeman would want to peel away union rank-and-file members to their corner. But if they’re serious about governing differently than Democrats, they can’t cozy up to the forces that are making New York expensive and slowing down innovation.

Long outdated work rules are endemic in the MTA, for example. They stand in the way of managerial reforms that would improve fiscal efficiency and the quality of riders’ experience. Don’t expect the TWU to give those up without a big fight.

Friendliness to unions also makes little sense politically. Even if Blakeman and co. manage to peel off some rank-and-file support, New York’s union leadership (putting aside police unions) is unlikely to break with Democrats—particularly given the current national political climate.

With New York’s Democrats firmly in league with reform-thwarting unions, Republicans have an opportunity to side with the public’s well-being against the special interests. A serious focus on greater affordability through results for tax dollars could form the basis of a winning message. But that’s only if the party is willing to confront, not accommodate, those interests.

If Republicans are unwilling to do so, the party will remain stuck where it is now: unable to articulate what it stands for, and even less able to convince voters to give them a chance.


Warren Buffett hands over Berkshire Hathaway’s reins to Greg Abel

  • Leadership transition: Warren Buffett, after six decades, has passed Berkshire Hathaway CEO duties to Greg Abel, marking a significant succession.
  • Abel’s background: The 63-year-old vice-chair, a low-profile executive from Berkshire’s utilities division, now leads the conglomerate.
  • Buffett’s legacy: Buffett transformed Berkshire from a textile mill into a $1.1tn conglomerate spanning railroads, utilities, and insurance.
  • Capital position: Berkshire holds more than $350bn in cash and short-term Treasuries plus $283bn in publicly traded stock while generating almost $900mn weekly from its businesses.
  • Investment philosophy: Abel has pledged to maintain Buffett’s focus on businesses with significant cash flows and a long-term horizon, including thorough economic forecasts.
  • Communication expectations: Investors seek clarity on whether Abel will introduce quarterly earnings calls, additional qualitative unit performance insights, or reshape the annual shareholder letter.
  • Technology bets: Market watchers wonder if Abel influenced the $4.3bn Alphabet investment, which could signal Berkshire’s stance on fast-growing tech opportunities.
  • Organizational moves: Abel is appointing a new CFO, first general counsel, and promoting NetJets’ CEO to oversee hundreds of consumer and service businesses to bolster a lean Omaha office.

Warren Buffett has handed over Berkshire Hathaway’s reins after six decades at the helm, in one of the most consequential corporate successions of a generation.

Berkshire vice-chair Greg Abel, 63, took charge as chief executive on Thursday, leaving the sprawling company in the hands of a low-profile protégé of one of history’s savviest investors.

Buffett, 95, transformed Berkshire from a struggling New England textile mill into a $1.1tn financial behemoth that spans railroads, utilities and insurance operations.

With Buffett promising to go “quiet”, Wall Street is watching to see how Abel will deploy Berkshire’s vast portfolio.

The company has more than $350bn of cash and short-term Treasuries and $283bn in publicly traded stock. Investors will also scrutinise how Abel allocates the almost $900mn of cash that flows in from its businesses each week.

“He’s inheriting the most privileged place in American business,” said Christopher Davis, a partner at Berkshire investor Hudson Value Partners. “Buffett was not only a great investor but someone people . . . looked up to for doing the right thing and dealing fairly and that gave Berkshire some pretty broad latitude.”

Column chart of Cash and cash equivalents ($bn) showing Berkshire's cash pile has swelled as it has sold out of stocks

Investors have more questions than answers as they wait to see how Abel, a longtime Berkshire executive, begins to leave his mark. They are keen to see whether he will hew to Buffett’s value investing philosophy, which in recent years has meant Berkshire has passed on several big-ticket deals and avoided many flashy tech investments.

Some analysts and investors have also pressed Buffett to launch quarterly earnings calls or provide better qualitative insight into how individual units are performing — a decision that now rests with Abel.

The new CEO has only seldom made himself available to the press and investors.

Shareholders have instead drawn on his comments at Berkshire’s annual meetings in Omaha, Nebraska, for clues on how he will size up investment opportunities and the business attributes he will be looking for when going elephant hunting — Buffett’s term for the group’s mammoth corporate takeovers.

Abel, a Canadian who rose up through Berkshire’s utilities division, has signalled the company’s investment philosophy will not change when he takes over.

Last year he told investors he would continue to target businesses that generate significant cash flows and the company’s long-term investment horizon would remain intact.

Investors are waiting to see if Greg Abel, centre, will continue to use Warren Buffett’s annual shareholder letter in February to lay out his vision for Berkshire © Brendan McDermid/Reuters

He added that Berkshire would still need to have a view on the economic prospects of a company in 10 or 20 years before investing, whether buying a business outright or purchasing a minority stake.

“It is really the investment philosophy and how Warren and the team have allocated capital for the past 60 years,” Abel said last May. “It will not change and it’s the approach we’ll take as we go forward.”

He declined to comment for this story.

Each February investors and corporate executives pore over Buffett’s annual shareholder letter. Buffett has said he will not be writing the next missive, and investors are waiting to see if Abel will use the annual letter to lay out his vision for Berkshire.

Investors are particularly focused on whether Abel was involved in Berkshire’s $4.3bn investment in Google owner Alphabet in late 2025, or if it was Buffett who signed off on the wager.

If Abel was a key driver of the investment, then investors could view that as a sign that Berkshire is open to making big bets on fast-growing technology companies.

“What is it in the capital allocation model that just made them decide to buy Alphabet now?” asked Christopher Rossbach, chief investment officer of J Stern & Co, a longtime Berkshire investor.

He added: “What makes Berkshire special is the public equity portfolio. And the question is: is it really going to be Greg managing that in the way that Warren did?”

Line chart of Cumulative total return since January 2020 (%) showing Berkshire Hathaway has eclipsed the S&P 500 over the past 25 years

Abel has begun to leave his stamp on some parts of Berkshire’s sprawling empire. He will be joined by a new chief financial officer next year as well as the company’s first general counsel.

He has also promoted the CEO of fractional jet ownership company NetJets to president of 32 of Berkshire’s consumer, retail and service businesses — a division that generated more than $40bn of revenues in the first nine months of 2025.

Investors such as Darren Pollock, a portfolio manager at Cheviot, said they saw the appointments as a step by Abel to fill in gaps at Berkshire’s slim home office in Omaha, which at the end of 2024 employed just 27 people.

“This paves the way for Greg’s larger priority — to win the confidence of shareholders who can’t help but be less comfortable with Abel than they are [with] the legendary Buffett,” Pollock said.

“Now, with the spotlight on him, Abel needs to show what people within Berkshire know him to be: a steady and razor-sharp leader.”


Meta created ‘playbook’ to fend off pressure to crack down on scammers, documents show | Reuters

  • Regulatory Pressure: Japanese regulators targeted a surge of scam ads on Meta’s platforms, prompting Meta to fear mandatory advertiser identity verification.
  • Ad Library Management: Meta analyzed keyword searches by Japanese regulators and deleted problematic ads to improve perceived outcomes and reduce scam discoverability.
  • Global Playbook: The tactic of mimicking regulator searches was expanded into a global strategy to delay or weaken advertiser verification in markets worldwide.
  • Revenue Concerns: Documents show Meta estimates universal verification would cost about $2 billion to implement and could cut up to 4.8% of total revenue by blocking unverified advertisers.
  • Reactive Stance: Meta officially maintains a “reactive only” approach, resisting verification unless law requires it and prioritizing voluntary programs instead.
  • Verification Impact: Taiwan’s mandatory advertiser verification reduced scam ads significantly, but Meta’s algorithms reroute disallowed ads to other countries, shifting harm.
  • Industry Comparison: Meta notes Google’s universal verification reset regulatory expectations, yet refuses comparable investments due to costs and potential revenue loss.
  • Risk Ranking: Fraudulent advertising earned the highest internal risk rating, with potential liability in Europe and Britain estimated at up to $9.3 billion if users’ losses became Meta’s responsibility.

SAN FRANCISCO - Japanese regulators last year were upset by a flood of ads for obvious scams on Facebook and Instagram. The scams ranged from fraudulent investment schemes to fake celebrity product endorsements created by artificial intelligence.

Meta, owner of the two social media platforms, feared Japan would soon force it to verify the identity of all its advertisers, internal documents reviewed by Reuters show. The step would likely reduce fraud but also cost the company revenue.

To head off that threat, Meta launched an enforcement blitz to reduce the volume of offending ads. But it also sought to make problematic ads less “discoverable” for Japanese regulators, the documents show.

The documents are part of an internal cache of materials from the past four years in which Meta employees assessed the fast-growing level of fraudulent advertising across its platforms worldwide. Drawn from multiple sources and authored by employees in departments including finance, legal, public policy and safety, the documents also reveal ways that Meta, to protect billions of dollars in ad revenue, has resisted efforts by governments to crack down.

In this case, Meta’s remedy hinged on its “Ad Library,” a publicly searchable database where users can look up Facebook and Instagram ads using keywords. Meta built the library as a transparency tool, and the company realized Japanese regulators were searching it as a “simple test” of “Meta’s effectiveness at tackling scams,” one document noted.

To perform better on that test, Meta staffers found a way to manage what they called the “prevalence perception” of scam ads returned by Ad Library searches, the documents show. First, they identified the top keywords and celebrity names that Japanese Ad Library users employed to find the fraud ads. Then they ran identical searches repeatedly, deleting ads that appeared fraudulent from the library and Meta’s platforms.


The tactic successfully removed some fraudulent advertising of the sort that regulators would want to weed out. But it also served to make the search results that Meta believed regulators were viewing appear cleaner than they otherwise would have. The scrubbing, Meta teams explained in documents regarding their efforts to reduce scam discoverability, sought to make problematic content “not findable” for “regulators, investigators and journalists.”

Within a few months, they said in one memo after the effort, “we discovered less than 100 ads in the last week, hitting 0 for the last 4 days of the sprint.” The Japanese government also took note, the document added, citing an interview in which a prominent legislator lauded the improvement.

Meta has studied searches of its Ad Library and worked to reduce the "discoverability" of problematic advertising. Documents reviewed by Reuters, and highlighted here by the news agency, show internal discussions about the effort. REUTERS

“Fraudulent ads are already decreasing,” Takayuki Kobayashi, of the ruling Liberal Democratic Party, told a local media outlet. Kobayashi didn’t respond to a Reuters request for comment about the interview.

Japan didn’t mandate the verification and transparency rules Meta feared. The country’s Ministry of Internal Affairs and Communications declined to comment.

So successful was the search-result cleanup that Meta, the documents show, added the tactic to a “general global playbook” it has deployed against regulatory scrutiny in other markets, including the United States, Europe, India, Australia, Brazil and Thailand. The playbook, as it’s referred to in some of the documents, lays out Meta’s strategy to stall regulators and put off advertiser verification unless new laws leave them no choice.

The search scrubbing, said Sandeep Abraham, a former Meta fraud investigator who now co-runs a cybersecurity consultancy called Risky Business Solutions, amounts to “regulatory theater,” distorting the very transparency the Ad Library purports to provide. “Instead of telling me an accurate story about ads on Meta’s platforms, it now just tells me a story about Meta trying to give itself a good grade for regulators,” said Abraham, who left the company in 2023.

Meta spokesperson Andy Stone in a statement told Reuters there is nothing misleading about removing scam ads from the library. “To suggest otherwise is disingenuous,” Stone said.

By cleaning those ads from search results, the company is also removing them from its systems overall. “Meta teams regularly check the Ad Library to identify scam ads because when fewer scam ads show up there that means there are fewer scam ads on the platform,” Stone wrote.

Advertiser verification, he said, is only one among many measures the company uses to prevent scams. Verification is “not a silver bullet,” Stone wrote, adding that it “works best in concert with other, higher-impact tools.” He disputed that Meta has sought to stall or weaken regulations, and said that the company’s work with regulators is just part of its broader efforts to reduce scams.

Those efforts, Stone continued, have been successful, particularly considering the continuous maneuvers by scammers to get around measures to block them. “The job of chasing them down never ends,” he wrote. The company has set global scam reduction targets, Stone said, and in the past year has seen a 50% decline in user reports of scams. “We set a global baseline and aggressive targets to drive down scam activity in countries where it was greatest, all of which has led to an overall reduction in scams on platform.”

Meta’s internal documents cast new light on the central role played by fraudulent advertising in the social media giant’s business model – and the steps the company takes to safeguard that revenue. Reuters reported in November that scam ads Meta considers “high risk” generate as much as $7 billion in revenue for the company each year. This month, the news agency found that Meta tolerates rampant fraud from advertisers in China.

In response to Reuters’ coverage, two U.S. senators urged regulators at the Securities and Exchange Commission and the Federal Trade Commission to investigate and “pursue vigorous enforcement action where appropriate.” Citing Reuters reporting, the attorney general of the U.S. Virgin Islands also sued Meta this month for allegedly “knowingly and intentionally” exposing users of its platforms to “fraud and harm” and “profiting from scams.” Stone said Meta strongly disagrees with the lawsuit’s allegations.

In Brussels, where European authorities have also been focused on scams, a spokesperson for the European Commission told Reuters its regulators had recently asked Meta for details about its handling of fraudulent advertising. “The Commission has sent a formal request for information to Meta relating to scam ads and risks related to scam ads and how Meta manages these risks,” spokesperson Thomas Regnier wrote. “There are doubts about compliance.” He didn’t elaborate.

The documents reviewed by Reuters show that Meta assigned its handling of scams the top possible score in an internal ranking of regulatory, legal, reputational and financial risks in 2025. One internal analysis calculated that possible regulation in Europe and Britain that would make Meta liable for its users’ scam losses could cost the company as much as $9.3 billion.

EMPLOY A “REACTIVE ONLY” STANCE

One big push among regulators is to get Meta and other social media companies to adopt what is known as universal advertiser verification. The step requires all advertisers to pass an identity check by social media platforms before the platforms will accept their ads. Often, regulators request that some of an advertiser’s identity information also be viewable, allowing users to see whether an ad was posted locally or from the other side of the world.

Google in 2020 announced that it would gradually adopt universal verification, and said earlier this year it has now verified more than 90% of advertisers. Along with requiring verification in jurisdictions where it’s legally mandated, Meta offers to voluntarily verify some large advertisers and sells “Meta Verified” badges to others, combining identity checks with access to customer support staff.

Documents reviewed by Reuters say that 55% of Meta’s advertising revenue came from verified sources last year. Stone, the spokesperson, added that 70% of the company’s revenue now comes from advertisers it considers verified.

The internal company documents show that unverified advertisers are disproportionately responsible for harm on Meta’s platforms. One analysis from 2022 found that 70% of its newly active advertisers were promoting scams, illicit goods or “low quality” products. Stone said that Meta routinely disables such new accounts, “some on the very day that they’re created.”

Meta’s documents also show the company recognizes that universal verification would reduce scam activity. They indicate that Meta could implement the measure in any of the countries where it operates in less than six weeks, should it choose to do so.

But Meta has balked at the cost.

Despite reaping revenue of $164.5 billion last year, almost all of which came from advertising, Meta has decided not to spend the roughly $2 billion it estimates universal verification would cost, the documents show. In addition to that cost of implementation, staffers noted, Meta could ultimately lose up to 4.8% of its total revenue by blocking unverified advertisers.


Instead of adopting verification, Meta has decided to employ a “reactive only” stance, according to the documents. That means resisting efforts at regulation – through lobbying but also through measures like the scrubbing of Ad Library searches in Japan last year. The reactive stance also means accepting universal verification only if lawmakers mandate it.

So far, just a few markets, including Taiwan and Singapore, have done so.

Even then, the documents show, the financial costs to Meta have remained small. Meta’s own tests showed verification immediately reduced scam ads in those countries by as much as 29%. But much of the lost revenue was recouped because the same blocked ads continued to run in other markets.

If an unverified advertiser is blocked from showing ads in Taiwan, for example, Meta will show those ads more frequently to users elsewhere, creating a whack-a-mole dynamic in which scam ads prohibited in one jurisdiction pop up in another. In the case of blocked ads in Taiwan, “revenue was redistributed/rerouted to the remaining target countries,” one March 2025 document said, adding that consumer injury gets displaced, too. “This would go for harm as well,” the document noted.

Meta analyses found that even when verification blocked ads in one market, those same ads would still generate revenues for the company in other markets. Highlighting of internal document reviewed by Reuters. REUTERS

Meta’s documents show the company believes its efforts to defeat regulation are succeeding. In mid-2024, one strategy document called the prospect of being “required to verify all advertisers” worldwide a “black swan,” a term used to describe an improbable but catastrophic event. In the months afterwards, policy staffers boasted about stalling regulations in Europe, Singapore, Britain and elsewhere.

In July, one Meta lobbyist wrote colleagues after they thwarted stricter measures considered by financial regulators in Hong Kong against financial scams. To get ahead of the effort, staffers helped regulators draft a voluntary “anti-scam charter.” They coordinated with Google, which also signed the charter, to present a “united front,” the document says. “Through skillful negotiations with regulators,” the Meta lobbyist wrote, Hong Kong relaxed rules that would have forced verification of financial advertisers. “The finalised language does not introduce new commitments or require additional product development.”

Hong Kong regulators, the lobbyist added, “have shown huge appreciation for Meta’s leading participation.”

Meta staffers boasted about success slowing the push by authorities for advertiser verification. In one document, highlighted here by Reuters, Meta employees say their lobbying in Hong Kong thwarted "new commitments" in local regulations. REUTERS

A Google spokesperson said the company signed onto the charter because it believed it would benefit customers. Google participated, he said, of its own accord and as the result of direct engagement with Hong Kong regulators.

In a statement, Hong Kong financial regulators said that “advertiser verification is one of many ways social media platforms can protect the investment public.” They declined to respond to Reuters’ questions about Meta and noted that the regulators involved with the charter don't themselves have the authority to impose advertiser verification requirements.

“All social media platforms should strengthen their efforts to detect and remove fraudulent and unlawful materials,” they added.

“INDUSTRY AND REGULATORY EXPECTATIONS”

Fraud across social media platforms has surged in recent years, fueled by the rise of untraceable cryptocurrency payments, AI ad-generation tools and organized crime syndicates. Mob rings have found the business so lucrative that they employ forced labor to staff well-documented “scam compounds” that generate waves of fraudulent content from southeast Asia. Internally, Meta has cited estimates that such compounds are responsible for $63 billion in annual damage to consumers worldwide.

In some countries, regulators have determined that Meta platforms host more fraudulent content than their online competitors. In February 2024, Singapore police reported that more than 90% of social media fraud victims in the city-state had been scammed through Facebook or Instagram. In a statement to Reuters, a spokesperson for Singapore’s Ministry of Home Affairs wrote that “Meta products have persistently been the most common platforms used by scammers.”

“We have repeatedly highlighted our deep concern over the continued prevalence of scams on Meta’s platforms,” the statement continued. After Reuters’ inquiries for this report, it added, Singapore authorities have asked Meta for more information and will broaden existing verification measures, including some mandating the use of facial recognition technology to prevent the impersonation of public figures. “We have reiterated that more needs to be done to secure Meta’s products and protect users from scams, instead of prioritising its profits. We have requested for a formal explanation from Meta and will take enforcement action if Meta is found to be in violation of legal requirements.”

A known weakness in Meta’s defenses is the ease of advertising on its platforms.

To purchase most advertisements, all a client needs is a user account – easily created with an email or phone number and a user-supplied name and birthdate. If Meta doesn’t verify those details, it can’t know who it’s doing business with. Even if an advertiser gets banned, there is nothing to stop it from returning with a new account. A fraudster can merely sign up again.

Meta has known about the problem for years, documents and interviews with former staffers show.

In the 2016 U.S. presidential election, fake political ads flooded Facebook with disinformation. In response, the company took steps to reduce the chances that could happen again. Back then, foreign actors seeking to influence the election easily placed ads masquerading as Americans. Some Russian advertisers pretending to be American political activists even paid for such ads in rubles, Meta has said.

Starting in 2018, the company began requiring a valid identity document and a confirmed U.S. address before clients could place political ads. In addition to providing verification for the company itself, the general details, including the name and location of the advertiser, could be viewed by users, too.

Rob Leathern, a former senior director of product management at Facebook who oversaw the effort to verify political advertisers, said the added transparency and accountability led some staffers to believe that Meta would broaden it to all advertisers. “I expected that the company would have continued to do more verification, and personally felt that was something that all major platforms should be doing,” said Leathern, who left the company at the end of 2020.

Meta in 2018 also introduced its Ad Library, an easily searchable database of all ads that run on its platforms. The company, the documents show, expected to generate goodwill with the library, particularly with regard to political advertisements. Competitors, including Google, soon launched ad libraries of their own.

In the years that followed, Meta continued to acknowledge the effectiveness of both transparency and verification. So-called “know your customer policies,” Meta staffers wrote in a November 2024 document, are “commonly understood to be effective at reducing scam-risks.” They noted a competitive component, too, citing Google’s move at the start of the decade to adopt universal verification: “Google’s approach to verify all advertisers is recalibrating industry and regulatory expectations.”

Meta, however, has been reluctant to pay for it.

The internal documents show that last year Meta consulted with a company that works with Google to verify advertisers. Meta officials, according to the documents, wanted to know how much it would cost to follow suit. But the answer – at least $20 per advertiser – proved too costly for their liking, one document said.

The Meta spokesperson said that the company, regardless of cost, didn’t work with the vendor because its verification process took too long.

The potential for lost revenue has also given the company pause.

In addition to lost income from advertisers culled by verification, stricter measures could also cannibalize a paid program through which Meta already charges advertisers for similar status. The program, known as “Verified for Business,” costs clients as much as $349.99 per month and allows businesses to display a badge assuring users that Meta has authenticated their profile. Meta describes the program as more than just basic verification, offering advertisers better customer support and protections against impersonation.

Still, the documents show, Meta managers fear those revenues could shrivel if the company adopts verification for all advertisers.

“WE HAVE AN OPPORTUNITY”

In 2023, because of a sharp rise in ads for investment scams, Taiwan passed legislation ordering social media platforms to begin verifying advertisers of financial products. The self-governing island, population 23 million, is small compared to Meta’s major markets, but the company’s response there helps illustrate how resistant Meta has been to growing regulatory scrutiny worldwide.

In private conversations, the documents show, Taiwanese regulators told Meta it needed to demonstrate it was taking concrete steps to help reduce financial scam ads. When it came to financial fraud, the regulators said, Meta needed to verify the identity of those advertising financial services and respond to reports of fraud within 24 hours.

Meta, according to the documents, told Taiwan it needed more time to comply. Regulators agreed. But Meta, the documents show, in the months that followed didn’t address the problem to the government’s satisfaction.

Frustrated, the Taiwanese regulators last year issued new demands. Now, the new regulations stated, Meta and the owners of other major platforms would have to verify all advertisers. Regulators told Meta it would be fined $180,000 for every unverified scam ad it ran, Meta staffers wrote.

If it didn’t comply, the staffers calculated, the resulting fines would exceed Meta’s total profits in Taiwan. It would be cheaper to abandon the market than to disobey, they concluded.

Meta complied, rushing to verify advertisers ahead of regulators’ deadlines.

In a statement to Reuters, Taiwan’s Ministry of Digital Affairs said stricter regulations over the past year brought down rates of scam ads involving investments by 96% and identity impersonation by 94%. In addition to requiring major social media platforms to verify advertisers, Taiwan has developed its own AI system to scan ads on Meta’s platform, set up a portal for citizens to report fraudulent ads, and established public-private partnerships to detect scams, the ministry added.

Over the course of 2025, the statement said, Taiwan has fined Meta about $590,000 for four violations of the law. The ministry said it “will maintain a close watch on shifting fraud risks.”

The new rules gave Meta the opportunity to study the impact that full verification would have on its business. Before the new regulation, according to internal calculations, about 18% of all Meta advertising in Taiwan, or about $342 million of its annual ad business there, broke at least one of the company’s rules against false advertising or the sale of banned products. Unverified advertisers, one analysis found, produced twice as much problematic advertising as those who submitted verification details.

Their analyses also revealed the whack-a-mole dynamic.

Because scamming is a global business – and Meta’s algorithms allow clients to choose multiple markets in which to advertise – many advertisers seeking to place fraudulent posts do so in more than one geography. Meta experiments showed that while fraudulent ads decreased in Taiwan after the rule change, its algorithms simply rerouted them to users in other markets.

“The implication here is that violating actors that only require verification in one country, will shift their harm to other countries,” one analysis spelled out. Unless advertiser verification was “enforced globally,” staffers wrote, Meta wouldn’t so much be fighting scams as relocating them.

The documents included briefing notes prepared for Chief Executive Mark Zuckerberg about the dynamic. Reuters couldn’t determine whether the Meta boss ever saw the notes or was briefed on their contents. But the notes reached a similar conclusion. They also warned of a complication: If enforcement in one jurisdiction worsened the problem of fraud in others, regulators in the newly impacted markets were likely to crack down, too.

Meta spokesperson Stone said he couldn’t determine whether Zuckerberg received the briefing described in the document reviewed by Reuters.

Faced with the prospect of ever-expanding scrutiny, Meta considered embracing full verification voluntarily, the documents show. The goal, staffers wrote, could enable the company to appear proactive but also set terms and a timeline on its own. “We have an opportunity to set a goal of verifying all advertisers (and communicate our intention to do so externally, in order to better negotiate with lawmakers),” a November 2024 strategy document noted. Meta could “stage the rollout over time and set our own definitions of verification.”

Policy staff even planned to announce the decision during the first half of 2025, the documents show. But for reasons not specified in the documents, they postponed an announcement until the second half of the year and then cancelled it altogether. Leadership had changed its mind, a document noted, without saying why.

“MIMIC WHAT REGULATORS MAY SEARCH FOR”

Instead, Meta began to apply some of the lessons it learned in Japan.

That experience helped the company realize that Tokyo wasn’t the only government using Ad Library searches as a means of tracking online fraud. “Regulators will open up the ads library and show us multiple similar scam ads,” public policy staffers lamented in one 2024 document. Staffers also noted authorities were employing one feature that was proving especially useful: a keyword search. Unlike Google’s version, the Meta library made it easy to find scam ads through searches with terms like “free gift” or “guaranteed profit.”

Managers overseeing a revamp of the Ad Library proposed eventually killing the keyword feature entirely, the documents show. Wary of blowback from regulators, however, Meta decided not to. The Meta spokesperson said Meta is not considering it.

The company did, however, change the library so that searches returned fewer objectionable ads.

One adjustment made searches default to active ads, reducing the number of search results by eliminating content that Meta had already blocked through prior screening. As a result, fraudulent ads from the past no longer appeared in new search results.

Staffers also made Meta’s systems rerun enforcement measures on all ads that appeared during new Ad Library searches, the documents show. That adjustment gave Meta a second chance to scrap violators that had previously evaded fraud filters.

One of the most useful tactics Meta learned in Japan was mimicking the searches performed by regulators. After repeating the same queries and deleting problematic results, staffers could eventually go days without finding scam ads, one document shows.

As a result, Meta decided to take the tactic global, performing similar analyses to assess “scam discoverability” in other countries. “We have built a vast keyword list by country that is meant to mimic what regulators may search for,” one document states. Another described the work as changing the “prevalence perception” of scams on Facebook and Instagram.

Meta’s perception-management tools are now part of what the company has referred to as its “general global playbook” for dealing with regulators. The documents reviewed by Reuters repeatedly reference the “playbook” as steps the company should follow in order to slow the push toward verification in any given jurisdiction.

Beginning one year ahead of expected regulation, the playbook advises, Meta should tell the local regulators it will create a voluntary verification process. When doing so, the documents add, Meta should ask those authorities for time to let the voluntary measures play out. To buy yet more time, and further gauge reactions from regulators, Meta after six months should force verification upon “new and risky” advertisers, the playbook continues.

Meta has devised a “global playbook,” summarized in the document here, to delay and weaken the push by regulators to mandate advertiser verification. Internal documents reviewed by Reuters show that verification reduces scam ads, but also costs Meta revenue. REUTERS

If ultimately regulators force mandatory verification for all, the playbook states, Meta should once again stall. “Keep engaging with regulator on extension,” one document advises.

The documents show Meta staffers celebrating the success of their efforts to change some perceptions.

In March, industry officials and regulators met for a conference in London organized by the Global Anti-Scam Alliance, a group that organizes regular gatherings to address online fraud. Meta staffers in one document celebrated the lack of scorn heaped on the company compared with previous events.

“There was a drastic shift in tone,” a project manager noted. “Meta was rarely called out whereas previously we were explicitly and repeatedly shamed for lack of action in countering fraud.”

Reporting by Jeff Horwitz. Additional reporting by Anton Bridge, Kentaro Okasaka, Yoshifumi Takemoto in Japan; Xinghui Kok in Singapore; Clare Jim, Selena Li and James Pomfret in Hong Kong; Yimou Lee and Emily Cha in Taiwan; Brad Haynes in Brazil; Phoebe Seers in UK; Panu Wongcha-um and Poppy McPherson in Thailand and Philip Blenkinsop in EU. Editing by Steve Stecklow and Paulo Prada.

Our Standards: The Thomson Reuters Trust Principles.
