Palantir Technologies owes much of its recent success to the rise of artificial intelligence, but that doesn’t mean the company’s leaders like it. “Slop,” executives declared, a total of 17 times, during a call with investors this past week, portraying the outputs of the major AI labs as too messy and unreliable for big enterprises.
“They should go out and flirt with all this slop,” Chief Executive Alex Karp said about companies that are shopping for AI. “Mostly they come home to Palantir.”
The company has long looked askance at AI. A decade ago, long before ChatGPT or Claude Code, Karp viewed AI as little more than an ad-targeting tool, people familiar with the matter said. He believed machines needed human intelligence to perform effectively and would point staff to an example of AI’s losing a game of chess to a human grandmaster.
Palantir learned to embrace AI, if not always enthusiastically, and offered it broadly to customers starting in 2023. The public swipes Palantir executives are taking at the quality of work coming from the AI labs these days reflect a concern increasingly familiar to the American worker: Palantir is at risk of being replaced, or at least rendered less necessary, by AI, according to AI company executives, current and former Palantir employees, and analysts who follow the company.
Corporate AI adoption has so far helped fuel Palantir’s blistering revenue growth as companies use large language models from OpenAI, Anthropic and Alphabet’s Google to manipulate information within Palantir’s data platform. Its growing defense-sector dominance and free access to the Trump administration, paired with Karp’s triumphalist rhetoric, have created an aura of invincibility.
Palantir Chief Technology Officer Shyam Sankar said in an interview that cheaper and open-source models have driven more business to Palantir. “We win when models get better, cheaper and more capable,” he told The Wall Street Journal. “The labs aren’t our competitors. They’re our supply chain.”
The decoupling of Palantir’s share price, down around 20% this year, from its ascendant top and bottom lines represents a bet by investors that new tools offered by the big AI labs, or built with their technology, could make Palantir’s expensive software less compelling to customers, exposing it to the forces that have dragged down share prices across the software sector this year. Some experts estimate that large language models can already replicate a majority of what Palantir does to make sense of large data sets.
In one direct threat, OpenAI is building a platform for connecting and structuring data that some said competes with Palantir, staffed in part by ex-Palantir employees, people familiar with the matter said. OpenAI and Anthropic have both raised funds at valuations above Palantir’s market capitalization.
Both companies have also replicated Palantir’s much-emulated practice of employing “forward-deployed engineers” who can embed within customers’ workforces to help drive adoption. On the call with shareholders this past Monday, Sankar appeared to take a jab at those efforts, saying the labs were trying to imitate Palantir.
Palantir reported another banner quarter of earnings, with record revenue and profit. It posted U.S. sales that more than doubled from the year prior. Yet cracks are starting to show. Among them: Growth in U.S. commercial bookings slowed to 45% from 137% in the prior quarter.
“It appears that competition with Anthropic and OpenAI is intensifying,” Louie DiPalma, an analyst at William Blair, wrote in a note.
Palantir sells software to centralize, manage and analyze large amounts of data, helping government agencies and private companies derive insights and make decisions about matters such as supply-chain planning or where to drop munitions. Two decades after its founding, Palantir hit its groove: It turned a profit for the first time in 2023.
For a long time, AI didn’t work, at least not well, and Palantir mostly steered clear. The company only turned to AI seriously after the launch of ChatGPT, which was built with help from an early and senior Palantir employee, people familiar with the events said.
That launch brought AI to the consumer mainstream, touching off a mad scramble for tech companies to update their AI plans. In 2023, Karp announced to the world that the company had a new AI product that was “currently under development.” The revelation caught his own engineers by surprise; they weren’t building any such product, the Journal previously reported.
Palantir’s new positioning set it up to benefit from the gold rush into AI: Its stock is up around 1,600% since it unveiled its Artificial Intelligence Platform. But it is still mostly not an AI company. It doesn’t build models. AIP imports models from other companies that help make Palantir’s software more powerful, and Palantir executives argue that their software makes many models more functional, reducing the slop.
Palantir employees have likened language models to crude oil, with Palantir as the refinery that makes them consumable. Yet many believe it is only a matter of time before the oil can do its own refining.
“The debate on Palantir isn’t growth. It’s whether they sit at a must-have layer of the AI stack, or if it’s just an expensive wrapper around AI models that are getting cheaper,” said Jake Behan, head of capital markets at Direxion, an asset manager.
Palantir continues to have a stronghold in its government business, where its early start in the defense sector and deep relationships in Washington limit its vulnerabilities. In the first year of the second Trump administration, Palantir was awarded more than $1.1 billion in federal contracts, a 70% increase from the prior year.
Palantir has become the operating system for the Defense Department. Its command-and-control Maven Smart System is set to become an official program of record, a highly desired status for defense contracts.
AI companies vying for a spot in the Pentagon said Palantir has in effect become a gatekeeper. The quarrel that led the Pentagon to label Anthropic a supply-chain risk and ban its use in defense work began at a meeting between Palantir and Anthropic employees, at which an Anthropic employee asked how the company’s Claude model was used in the January military operation in Venezuela, the Journal previously reported.
But even the Pentagon is expanding beyond Palantir. With a mandate to move quickly on AI, Pentagon leadership is extending model adoption from headquarters to the field where soldiers operate. The more-lightweight models that startups are designing to run on soldiers’ phones or drones often aren’t compatible with Palantir, AI startup executives said. Palantir has responded with a new version of Maven that works on drones.
Ben Van Roo, co-founder of the AI defense startup Legion Intelligence, said that Palantir’s Maven has been a success but that “it’s a subset of workflows out of thousands” in the department. There is a need for many more AI solutions that can help with intelligence gathering, logistics and other aspects of war that will happen outside Palantir. “That’s the decade ahead,” he said.
Copyright ©2026 Dow Jones & Company, Inc. All Rights Reserved.
Heather Somerville is a reporter at The Wall Street Journal in San Francisco covering technology and national security. Her articles explore the national-security implications of emerging technology, U.S. efforts to counter China's rise as a technology power, and the relationship between Silicon Valley and the U.S. defense complex.
Heather joined the Journal in 2019 to cover venture capital and technology companies. Before that, she wrote about venture capital and Silicon Valley startups for Reuters and the Mercury News. She was previously a reporter for the Fresno Bee and the Charlotte Observer and wrote about national security for outlets in Washington, D.C.