Strategic Initiatives

Why More Weimar? - by Andrew Klavan - THE NEW JERUSALEM

  • Journalistic Tendency: Most journalists simplify history to a basic narrative: "Rome fell, then Hitler came."
  • Weimar Comparison: Several contemporary headlines draw parallels between the present-day United States and Weimar Germany, declaring an American "Weimar Moment" or "Weimar 2.0."
  • Decadence Parallels: The United States is acknowledged to be in a period of dissolution and transition, similar to Weimar, potentially marked by polarized politics and cultural decay.
  • Skewed Weimar Image: The common perception of Weimar, often shaped by media like Cabaret or Babylon Berlin, incorrectly suggests a time of merely delightful artistic creativity overshadowed by Nazis.
  • Historical Patterns: Societal or personal histories follow either a rising spiral (wise, creative recurrence), a descending spiral (darker, self-destructive recurrence), or a circle (repetitive, stagnant behavior).
  • Coping with Dissolution: Historical dissolution has led to varied outcomes; the author cites the Victorian era that followed the decadence of the French Revolutionary period, and the American 1950s that followed the 1920s and '30s.
  • Role of Baby Boomers: The author identifies the falling away of the Baby Boomer generation as a primary cause of the current decline, sardonically citing its legacy of hypocritical virtue signaling and squandered accumulated wealth.
  • Path Forward for People: Suggested actions for the populace during this transition include ordering one's sex life toward family creation, prioritizing facts and workable political solutions based on founding principles, and pursuing beauty in culture and life, alongside loving God.

In the minds of most journalists, all human history can be summed up with the words, “Rome fell, then Hitler came.” A few more accomplished scholars among the chattering classes might be able to distinguish between the fall of Rome’s republic, followed by the empire, and the fall of the empire, followed by a thousand years of hideously faith-infused darkness. But basically, to journalists, all signs of political or cultural decadence — whether it’s a Republican being elected or some other Republican being elected — are omens that American Rome is about to collapse followed by the rise of American Hitler.

“A Confessing Church for America’s Weimar Moment,” reads a headline in The Dispatch. “We must not let the shooting of Charlie Kirk become Trump’s Reichstag fire,” writes a genius at The Guardian. “Welcome to Weimar 2.0,” headlines a book excerpt by the catastrophist Robert D. Kaplan in Foreign Policy. And that’s not even counting the continual Hitler forecasts on cable TV. One could almost feel sorry for der old Fuhrer, his psycho ass getting dragged out of hell every ten minutes to serve as a warning to us all.


And look, yes, it’s true, the United States is in one of those periods of dissolution and transition that come upon every nation from time to time. So too was Weimar, the unstable republic that rose up in the wake of Germany’s crushing defeat in World War I. Such periods are often marked by polarized politics, destructive sexual profligacy wrongly celebrated as liberation, and art forms informed by and infused with the general cultural decay. So with America. So with Weimar.

Beyond that, however, the comparison is a shallow one. For one thing, the image of Weimar among our clerisy is skewed with bias. Watch any movie set in the era — the musical Cabaret is a good one — or any television series — Babylon Berlin is my favorite — or read histories like Peter Gay’s famous Weimar Culture. You’ll get the sense that you’re learning about a time of delightful but doomed sexual and artistic creativity, with benign but hapless socialists slowly falling beneath the darkening Nazi shadow.

But no. The undisciplined sex lives, the anti-freedom politics of both the left and the right, and the shocking but ultimately ephemeral cultural productions were all one thing, all part of the same process of decay. Weimar — and the world wars that preceded and followed it — marked the end of a civilization — the greatest civilization that has yet existed on planet Earth — the civilization that was Europe from around 1345 to 1914. It was this, in fact, that gave Hitler his rhetorical superpower: he diagnosed the decadence correctly. He just didn’t realize he was its ultimate manifestation. Ah well. Even a mass-murdering racist lunatic can’t be right all the time.


So the U.S. is in a decadent state and so was Weimar, and decadence anywhere looks like decadence everywhere. History does repeat itself.

But it repeats itself for a reason. It’s the same reason a person’s life tends to repeat itself. The human race, like each of its individual members, has a nature, a character. Like everything that exists, character is bounded by its defining limits. Those limits cause it to express itself in a finite number of patterns. Over time, those patterns are certain to reassert themselves in recognizable forms.

The patterns of a healthy society or person will be shaped something like a rising spiral. Certain arcs of activity will recur over and over, but they will recur in increasingly wise, creative and productive ways. There will be periods of dissolution but they will be what is sometimes referred to as “chaos leading to a higher plane.” You can trace such a pattern in the plays of William Shakespeare. You can find it in the history of England between Elizabeth and the end of World War II.

The history of a rotten society or person will form a descending spiral, the repeated arcs of foolishness and evil growing ever darker and more self-destructive. The life of every tyranny is like this. The life of every addict too.

Then there’s the history of an ignorant or neurotic society or person. This will simply look like a circle. The subject goes round and round on a level track while uttering occasional gasps of wonder at having found themselves at a place they’ve passed a hundred times before but have a hundred times forgotten. You know people like this. Every now and then, they announce they’ve had some smack-the-forehead revelation that will change their lives forever. Then they return to the exact same ideas and behaviors they’ve had before. Perpetually primitive tribes are also like this, no matter how we want to romanticize them.

In short, times of dissolution will indeed be followed by a transition. But Hitler — get ready for it — is not the only possible outcome. The time surrounding the French Revolution in Great Britain showed many of the usual signs of decadence, but the Victorian era that followed was arguably Britain’s greatest age. So with the U.S. in the 1920s and ’30s. Those decadent days led to the mighty 1950s and a new American half century. As with each individual, what follows a nation’s dissolution all depends on the essential health of the nation’s character.

Where then are we?

It seems obvious to me that the primary cause of our current decline is the falling away of the Baby Boomers. My age cohort has dominated the political and cultural landscape for seventy years. Now, we’re played out and heading for the exits. Our accomplishments were many. We made hypocritical virtue signaling an art, introduced self-destructive drug use to the general population, slowed the rise of black Americans by burying them under the smothering dependencies of the Great Society, and wasted all the money accumulated by every generation before us along with some money that will be accumulated by the generations to come. We also popularized the personal computer so, like, yay for us.

What the U.S. will become when we are gone is, of course, not yet clear, but it’s a good bet it will be defined by the way we use the new forms of communication that have been and are still being generated by technology. Who or what will dominate those engines of culture is largely what all the current division, panic, bullying and violence is intended to decide.

Periods like this are often resolved by the rise of a powerful personality. Will this person shape the coming Zeitgeist or merely ride its wave? Your answer will depend on whether you agree with Thomas Carlyle that “the History of the world is but the Biography of great men,” or with Tolstoy, who said, “great men—so called—are but the labels that serve to give a name to an event.” Like the question of nature and nurture, this one is impossible to resolve, probably because the binary itself is an illusion. But as I say, contra the journalists, Hitler is not the only model for such era-defining personalities. George Washington and Queen Victoria are far more benign examples of the breed. Luther and Napoleon were mixed blessings. History is like a box of chocolates. You never know what you’re going to get.

When we scan the horizon for the coming definitive personality, it’s pretty hard to see the horizon with the gigantic Donald Trump standing in the way. So yeah, maybe he’s the guy. He has no coherent ideology so his effect may not be a lasting one. But he does bear all the marks of a Tolstoyan Godzilla. Like that monster created by nuclear blasts, Trump is the product of our generation’s eruption of materialist vulgarity. But he is also the avatar of the American everyman in our time. Like that character — much maligned by the elites who feed off his bounty — he is blunt, common sensical, creative, independent and bold. If he uses these traits to blast away such fatal nonsense as open borders, transgenderism and socialism — if he restores our simple patriotism and can-do spirit — the imprint of his nature may well be Trump’s legacy to the new age.

But this is still America. Perhaps the great figure that will shape our 21st century is not Trump or any other individual, but we, the people. In that case, Weimar, with its sexual libertinism, its artistic Dadaism and its polarized politics, actually does serve as a guide — a guide to what each and every one of us should not do.

Let us, the people, then, like healthy men with healthy minds, eschew the habits of decadence.

Get your sex life in order. Give up porn and feminism and other such perversions, and yoke your desires to their natural purposes: the creation of families, the solidifying of marriages and the maintenance of self-governing homes.

In politics, murder your isms. Follow the facts instead. Do what works, what keeps people free, what makes free people happy, what encourages the wealthy but protects the poor. Our founding principles are still the best guides ever invented to achieving those ends. Use them. Make our leaders use them too.

And in the culture, forget the overrated shock of the new. In the arts as in life, follow one rule: Make something beautiful. Whether it’s a movie or an invention, a business or a family, a community at peace or simply a life lived lovingly and well. Turn your back on every ugliness and cruelty. Head down the path of beauty and, like the prodigal son, you will see your Father running to greet you from afar.

And that’s another thing — one last thing — love the Lord your God with all your heart and mind. In general, they neglected to do that in Weimar and ended up worshipping the devil instead. No matter what anyone tells you, those are the only two choices. Choose right, and I guarantee you, Hitler will stay in hell — or on cable TV, which amounts to the same thing — where he belongs.



Chins have no point | ConnectSci

  • Chin Uniqueness: No other living apes or extinct hominins, including Neanderthals, possess a bony protuberance identifiable as a true chin.
  • Emergence of Chin: The human chin first appeared with the emergence of Homo sapiens approximately 200,000 to 300,000 years ago.
  • Existing Theories: Past explanations for the chin's origin posited it resulted from natural or sexual selection, suggesting roles in diet adaptation, speech, or as sexual ornamentation.
  • Research Shortcoming: A consensus on the chin's evolutionary origins remains elusive despite research efforts, partly due to theoretical and empirical shortcomings in prior explanations.
  • Spandrel Concept: Some argue chins are "spandrels"—accidental byproducts of other evolutionary processes, a term coined by Gould and Lewontin.
  • New Contention: A recent study in PLOS One contends the human chin evolved largely by accident, not direct selection, as a byproduct of selection acting on other skull regions.
  • Methodology Used: Researchers tested this null hypothesis by comparing traits across ape and human skulls.
  • Conclusion on Selection: The study suggests changes in the chin region result from selection on other parts of the jaw and skull, providing evidence against the assumption that all physical traits result from positive selection.

Evolutionary anthropologists have long debated why humans have chins.

No other living apes have chins. No extinct hominin – including direct ancestors of modern humans like Homo erectus and close relatives like Neanderthals – had chins.

Chins are uniquely human – there is no other animal with a bony protuberance which can be identified as a true chin. The first chins emerged about 200,000 to 300,000 years ago with the emergence of our species Homo sapiens.

So… why?

Scientists have put forward several theories as to the origin and purpose of chins (apart from growing epic goatees).

“Despite high research effort into the chin's evolutionary origins, a consensus for this feature remains elusive, partly because all prior explanations have theoretical and/or empirical shortcomings,” writes University of Florida anthropologist James Pampush in a 2015 paper.

Some have argued that chins are the result of natural or sexual selection. Chins could, they argue, have been an adaptation to changing diet, speech or even a sexual ornamentation.

Others argued that chins have not been directly produced by natural selection. Such accidental byproducts in organisms are known as “spandrels”.

The term spandrel was coined in 1979 by palaeontologist Stephen Jay Gould and evolutionary biologist Richard Lewontin, borrowing the word from architecture where forms and spaces arise as necessary byproducts of another design decision, for example the space under stairs.

A new paper published in the journal PLOS One contends that the human chin is nothing more than an evolutionary accident.

“The chin evolved largely by accident and not through direct selection, but as an evolutionary byproduct resulting from direct selection on other parts of the skull,” says first author Noreen von Cramon-Taubadel, from the University at Buffalo, USA, in a media release.

“Just because we have a unique feature, like the chin, does not mean that it was shaped by natural selection to enhance an animal’s survivability, for example a buttress for the lower jaw to help dissipate the forces of chewing,” says von Cramon-Taubadel. “The chin is likely a byproduct, not an adaptation.”

The team tested this “null hypothesis” – the default assumption that no direct selection acted on the chin itself – by comparing traits across ape and human skulls.


Craniomandibular landmarks with all 22 cranial and 24 mandibular interlandmark distances considered here. Credit: PLOS One (2026). DOI: 10.1371/journal.pone.0340278

“It’s only by studying the whole that we can better understand what aspects of an animal have a functional purpose and what are the side products of that purpose.”

“While we do find some evidence of direct selection on parts of the human skull, we find that traits specific to the chin region better fit the spandrel model,” von Cramon-Taubadel says. “The changes since our last common ancestor with chimpanzee are not because of natural selection on the chin itself but on selection of other parts of the jaw and skull.”

The researchers say this work challenges the false assumption that all physical characteristics are the result of positive selection or have a specific function.

“Generating empirical evidence against that line of reasoning is an important goal of this study and biological anthropology in general,” von Cramon-Taubadel says. “The findings underscore the importance of assessing the evolution of physical characteristics with trait integration in mind.”


SpaceX and xAI Enter Secret $100M Pentagon Contest for Autonomous Drone Swarms

  • Secret Contest: SpaceX and xAI are competing in a clandestine $100 million Pentagon contest to develop autonomous drone swarm software.
  • Contradictory Stance: Elon Musk previously signed a 2015 open letter warning against autonomous weapons that operate without "meaningful human control."
  • Scope of Work: Unlike OpenAI, which limits its role, SpaceX and xAI plan to work on the entire project, up to mission execution and targeting phases.
  • Pentagon Program: The competition, run by the Defense Innovation Unit and the Defense Autonomous Warfare Group (DAWG), seeks software to translate spoken commands into drone coordination across air and sea.
  • xAI's Growth: xAI is rapidly hiring personnel with high-level security clearances for this defense work, having previously secured contracts for chatbot deployment within government sites.
  • Financial Incentive: The Pentagon contracts provide a crucial revenue stream for xAI, which reportedly carries significant debt and faces revenue needs outside of consumer adoption.
  • Anthropic Exclusion: Anthropic may be designated a "supply chain risk" by the Pentagon for refusing to drop usage policy restrictions related to autonomous weapons and mass surveillance.
  • Internal Concerns: Individuals familiar with the drone swarm program have expressed anxiety regarding LLMs potentially making lethal decisions due to potential fabrication (hallucinations) in command outputs.

AI News

SpaceX and xAI are competing in a secret $100M Pentagon contest to build autonomous drone swarm software, Bloomberg reports. Musk signed a 2015 letter opposing such weapons.

Maria Garcia · February 16, 2026, 11:45 AM PST · 11 min read

SpaceX and its subsidiary xAI are competing in a secret Pentagon contest to build voice-controlled autonomous drone swarming technology, Bloomberg reported Sunday. The $100 million prize challenge, launched in January by the Defense Innovation Unit and the Defense Autonomous Warfare Group, will develop software that translates spoken battlefield commands into digital instructions for coordinating drone swarms across air and sea. Neither SpaceX nor xAI responded to Bloomberg's requests for comment.

Their participation had not been previously disclosed. Musk's companies are among a small number of entrants selected for the six-month competition, according to people familiar with the matter who requested anonymity to discuss a classified program. The entry is a direct reversal for Musk. He signed an open letter in 2015 from AI researchers warning against autonomous weapons that "select and engage targets without meaningful human control." A decade later, his engineers will build software for a program the Pentagon describes as progressing through "target-related awareness" to "launch to termination."

The Breakdown

• SpaceX and xAI are competing in a secret $100M Pentagon contest to build voice-controlled autonomous drone swarm software.

• Musk signed a 2015 letter opposing autonomous weapons. His companies will now work on the full project, including targeting phases.

• OpenAI is also involved but limits its role to voice-to-command translation. SpaceX and xAI set no such boundary.

• The Pentagon may designate Anthropic a "supply chain risk" for refusing to drop restrictions on autonomous weapons and mass surveillance.

Five phases, from software to live weapons

The competition runs over six months in five phases. Early rounds focus only on software development. Later stages move to live platform testing, multi-domain drone coordination across air and sea, and what the Pentagon calls mission execution.

A defense official was blunt about the ambition when the contest was first announced. The human-machine interaction being developed, the official said, "will directly impact the lethality and effectiveness of these systems."

Voice-to-command translation is the centerpiece. A commander speaks an order into a headset. AI software converts the words into digital instructions. Multiple drones receive those instructions simultaneously and act as a coordinated swarm, adjusting behavior autonomously while pursuing a target. Flying several drones at once is already routine. Directing a swarm that adapts on its own, that splits and reforms across air and sea, that makes real-time decisions without waiting for human input on each maneuver, is the problem nobody has solved at scale.

DAWG, the Defense Autonomous Warfare Group, was created under the second Trump administration as part of US Special Operations Command. It continues the work of the Biden-era Replicator initiative, which sought to produce thousands of expendable autonomous drones. Replicator was about volume. DAWG wants coordination, swarms that think together and respond to human speech in real time.

xAI's defense buildup

xAI is not treating this as a side project. Job postings on the company's website call for software engineers with active "secret" or "top secret" security clearance, experience working with the Department of Defense or federal contractors, and fluency in AI and data projects. The listing promises a hiring process complete within a week. That urgency matters. Defense hiring with clearance requirements normally takes months.

The company already has a footprint in Pentagon contracting. xAI announced in December that its Grok chatbot would be deployed across government sites to "empower military and civilians." Before that, it secured a $200 million contract to integrate xAI into military systems. Those earlier deals centered on information tools, chatbots and analysis platforms. Building the software that coordinates autonomous weapons platforms is a different category of work entirely, and one Musk once argued should not exist. But xAI needs the revenue. The company carries billions in debt, faces regulatory scrutiny after Grok produced sexualized images, and brings in far less money than SpaceX. Pentagon contracts offer a funding stream that doesn't depend on advertising or consumer adoption.

SpaceX, for its part, is a longtime defense contractor, but its military portfolio has stayed close to rockets and satellite communications. SpaceX, Boeing, and Lockheed Martin handle the Pentagon's most sensitive satellite launches. Offensive weapons software is a different business. Rockets go up. Drone swarms go hunting.

Just weeks ago, SpaceX and xAI agreed to merge in a deal valued at $1.25 trillion. Musk's announcement framed the combination as "the most ambitious, vertically-integrated innovation engine on (and off) Earth, with AI, rockets, space-based internet, direct-to-mobile device communications and the world's foremost real-time information and free speech platform." The word "weapon" did not appear. Neither did "drone" or "military." The Pentagon contest fills in what the press release left out.


OpenAI draws a line that SpaceX does not

OpenAI is also involved in the contest, but with strict limits on its role. The company is supporting bids from two defense technology partners, including Applied Intuition, Bloomberg reported last week. OpenAI's contribution covers only the "mission control" element, converting voice commands into digital instructions. Its technology will not control drone behavior, handle weapons integration, or hold targeting authority, according to a submission document Bloomberg reviewed.

A spokesperson said OpenAI's open-source models were included in contest bids and that the company would ensure any use stayed consistent with its usage policy. OpenAI did not submit its own bid. Its involvement came through existing defense partners who embedded the models in their proposals. Sam Altman has said he does not expect OpenAI to develop AI-enabled weapons platforms "in the foreseeable future," though he has left the door open. The distinction is fine but deliberate: providing tools is different from building weapons, even if the tools end up inside weapons systems.

SpaceX and xAI set no such boundary. People familiar with the matter told Bloomberg that the two companies plan to work on the entire project together, including later phases involving target awareness and mission execution. You can draw a clean line between translating a commander's voice into code and directing a drone to strike a target. OpenAI stays on one side of that line. SpaceX and xAI do not.

The companies that won't play

Not every AI company wants in. A senior administration official told Axios this week that the Pentagon is "close" to designating Anthropic a "supply chain risk," a classification that would force every military contractor to sever ties with the AI company. Anthropic has maintained hard limits on autonomous weapons development and mass domestic surveillance in its usage policies. Pentagon officials treat those restrictions as operational obstacles. Anthropic treats them as non-negotiable.

The swarm contest is one piece of a larger Pentagon drone push. The Defense Department has a separate Drone Dominance Program with more than $1 billion in planned spending, and Defense Secretary Pete Hegseth launched it last year with a memo titled "Unleashing U.S. Military Drone Dominance." Anthropic is walking away from that money. SpaceX and xAI are running toward it.

The Pentagon's own people are nervous too

Even inside the building that commissioned this contest, the idea of wiring generative AI into weapons systems has generated real anxiety. Several people with knowledge of the drone swarm program told Bloomberg they worried about what happens when large language models translate voice commands into operational decisions without a human checking the output.

LLMs hallucinate. They generate text that reads as authoritative but is fabricated. In a consumer product, a hallucination produces a wrong restaurant recommendation or a fabricated citation. In a system coordinating autonomous drone swarms in a combat zone, a fabricated output could misdirect a lethal strike. The people Bloomberg spoke with said it would be critical to keep generative AI restricted to translating commands, not to let it make decisions about drone behavior or targeting on its own.

The Pentagon's AI Acceleration Strategy, released in January, does not dwell on those concerns. The document calls for "unleashing" AI agents across the battlefield, from campaign planning to targeting, with language broad enough to accommodate lethal autonomous action. Defense contracts involving AI have a history of provoking backlash from the companies tasked with building them. Google tried this once. Employees revolted over Project Maven, a Pentagon program that used AI to analyze drone footage, and the company pulled out. That lesson stuck inside Big Tech for years. But the political and commercial incentives have shifted. OpenAI is now a Pentagon partner. xAI has a $200 million military contract. Anthropic held the line on its restrictions and may get cut from defense work entirely. An OpenAI researcher recently left over concerns about ads in ChatGPT. At Anthropic, another resigned publicly, citing worries about where AI development is headed. The exits have not slowed the contracting.

For now, the drone swarm contest sits in phase one. Software only, no live platforms. But the procurement documents describe where the program is headed, and every company selected to compete signed on knowing the trajectory. Musk's 2015 letter warned against weapons that operate "beyond meaningful human control." His companies' entry in this contest will test whether that phrase still carries weight when a hundred million dollars and a Pentagon contract sit on the other side of the table.

Frequently Asked Questions

Q: What is the Pentagon's drone swarm contest?

A: A $100 million prize challenge launched in January by the Defense Innovation Unit and the Defense Autonomous Warfare Group. It aims to develop software that translates spoken battlefield commands into instructions for coordinating autonomous drone swarms across air and sea over six months.

Q: What is the Defense Autonomous Warfare Group?

A: DAWG was created under the second Trump administration as part of US Special Operations Command. It continues the Biden-era Replicator initiative but focuses on drone swarm coordination rather than just producing large numbers of expendable drones.

Q: How is OpenAI's involvement different from SpaceX's?

A: OpenAI limits its role to the mission control element, converting voice commands into digital instructions. It will not control drone behavior or handle weapons integration. SpaceX and xAI plan to work on the entire project, including later targeting and mission execution phases.

Q: Why might the Pentagon designate Anthropic a supply chain risk?

A: Anthropic has refused to remove usage policy restrictions on autonomous weapons development and mass domestic surveillance. A supply chain risk designation would force every military contractor to cut ties with the company, effectively locking Anthropic out of defense work.

Q: What is the Drone Dominance Program?

A: A separate Pentagon initiative with more than $1 billion in planned spending across four phases. Defense Secretary Pete Hegseth launched it to field hundreds of thousands of cheap, weaponized attack drones by 2027. The first phase selected 25 vendors to compete.




Anthropic's $3 Model Matched Its $15 Flagship. The Spread Collapsed.

  • Model Performance Compression: Anthropic's Sonnet 4.6, priced at $3 per million input tokens, matches or surpasses the performance of its premium counterpart, Opus 4.6 ($15/M), on several key enterprise benchmarks.
  • Enterprise Benchmark Superiority: Sonnet 4.6 scored higher than Opus 4.6 on the GDPval-AA real-world office performance benchmark (1633 vs 1606) and agentic financial analysis (63.3% vs 60.1%).
  • Self-Undercutting Strategy: Anthropic is intentionally commoditizing its premium tier by releasing a cheaper model that performs comparably, signaling a focus on high-volume infrastructure sales over high-margin luxury access.
  • Volume Driven Growth: The strategy is supported by significant growth in large accounts; customers spending over $100,000 annually increased sevenfold year-over-year, with million-dollar-plus accounts growing from 12 to over 500.
  • Erosion of SaaS Layers: Sonnet 4.6 scored 72.5% on OSWorld-Verified (computer use), nearly five times the 14.9% that Anthropic's first computer-use release scored in October 2024, threatening software vendors whose value resides in human-interface integration.
  • Autonomous Planning Capability: In the Vending-Bench simulation, Sonnet 4.6 demonstrated long-horizon strategy execution, nearly tripling its predecessor's final balance over a simulated year by choosing early capacity investment over immediate returns.
  • Competitive Landscape Shift: Competitors lag on agentic work, with GPT-5.2 trailing significantly on agentic computer use (38.2% vs. 72.5%), compelling rivals like OpenAI to hire specialists in agent capabilities.
  • Opus Future Niche: Opus 4.6 retains an advantage only in the most demanding reasoning tasks like codebase refactoring, but if the next Sonnet iteration closes even half the remaining performance gap, the premium tier risks becoming a niche product.

Two weeks. That's how long Anthropic's premium tier held its lead.

Earlier this month, the company shipped Opus 4.6 at $15 per million input tokens, billing it as the smartest model for enterprise work. Financial analysts praised it. Software stocks cratered on the implications. The iShares Expanded Tech-Software Sector ETF has dropped more than 20% year to date, partly on fears that AI agents would start filling out the forms and processing the documents those companies sell software for.

Then on Tuesday, Anthropic shipped Sonnet 4.6 at $3 per million input tokens. One-fifth the Opus price. And the benchmarks showed it beating its flagship on office tasks, financial analysis, and several categories enterprises actually care about. On GDPval-AA, the benchmark measuring real-world office performance, Sonnet 4.6 scored 1633 to Opus 4.6's 1606. On agentic financial analysis, Sonnet hit 63.3% versus Opus at 60.1%.

If you run AI agents at scale, the math just shifted under your feet. The model that costs five times less now outperforms the expensive one on the work most businesses actually do.

The Breakdown

  • Sonnet 4.6 at $3/M tokens matches or beats Opus 4.6 ($15/M) on most enterprise benchmarks
  • Enterprise customers are migrating from Opus to Sonnet, collapsing Anthropic's own premium tier
  • Computer use scores jumped from 14.9% to 72.5% in 16 months, threatening SaaS integration layers
  • In Vending-Bench simulation, Sonnet 4.6 tripled its predecessor's returns with autonomous long-horizon strategy

The spread that ate itself

Think of this in financial terms. When the spread between a premium asset and a cheaper substitute narrows to zero, the market stops paying for the difference. Anthropic just did this to its own product line.

Sonnet 4.6 didn't sneak up on Opus. It landed on top of it. On SWE-bench Verified, Sonnet scored 79.6%. Opus managed 80.8%. Barely a point between them. On OSWorld-Verified, which tests how well a model can actually use a computer, Sonnet hit 72.5%. Opus scored 72.7%. That's 0.2 points separating a $3 model from a $15 one.

The company positioned this as good news. "Performance that would have previously required reaching for an Opus-class model is now available with Sonnet 4.6," the announcement read. Translation: we just commoditized our own top shelf.


Enterprise customers noticed immediately. Caitlin Colgrove, CTO of Hex Technologies, told VentureBeat her company is moving the majority of its traffic to Sonnet 4.6. "At Sonnet pricing, it's an easy call for our workloads," she said. Ryan Wiggins at Mercury Banking was more blunt. "We didn't expect to see it at this price point."

That's not what you say about a routine update. You say it when the floor drops out.

Why Anthropic is eating its own premium

This looks like self-harm. It isn't.

Anthropic is running a volume play, and the $30 billion fundraise at a $380 billion valuation that closed last week tells you everything about the stakes. Every model release compresses the spread further, and that's by design.

The numbers from Axios spell it out. Customers spending over $100,000 annually on Claude increased sevenfold year over year. The million-dollar-plus accounts? Roughly 12 two years ago. More than 500 now. So much for the luxury tier. It's selling infrastructure. Infrastructure wins on volume, not margin.

Consider what happens when an enterprise deploys AI agents that process 10 million tokens per day. At Opus pricing, that runs $150 in input costs alone. At Sonnet pricing, $30. Multiply by hundreds of agents running around the clock for months. The gap between the two tiers isn't a rounding error. It determines whether you deploy at all.
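To make the tier gap concrete, here is a minimal back-of-the-envelope sketch in Python. The per-million-token prices and the 10-million-token daily workload come from the article; the fleet size and 30-day horizon are hypothetical assumptions, included only to show how the difference compounds at scale.

```python
# Illustrative cost math only. Prices ($15/M Opus, $3/M Sonnet input tokens) and
# the 10M-tokens-per-day workload are from the article; the fleet size and
# 30-day horizon below are hypothetical assumptions for scale.

OPUS_PRICE_PER_M = 15.0    # USD per million input tokens
SONNET_PRICE_PER_M = 3.0   # USD per million input tokens

def daily_input_cost(tokens_per_day: float, price_per_million: float) -> float:
    """Input-token cost for one agent for one day."""
    return tokens_per_day / 1_000_000 * price_per_million

tokens_per_day = 10_000_000   # one agent processing 10M input tokens per day
agents = 200                  # hypothetical fleet size
days = 30                     # roughly a month of continuous operation

for name, price in [("Opus 4.6", OPUS_PRICE_PER_M), ("Sonnet 4.6", SONNET_PRICE_PER_M)]:
    per_agent_day = daily_input_cost(tokens_per_day, price)
    fleet_month = per_agent_day * agents * days
    print(f"{name}: ${per_agent_day:,.0f}/agent/day -> ${fleet_month:,.0f} for {agents} agents over {days} days")

# Opus 4.6: $150/agent/day -> $900,000 for 200 agents over 30 days
# Sonnet 4.6: $30/agent/day -> $180,000 for 200 agents over 30 days
```

Under these assumed numbers the tier choice is a six-figure line item per month, which is the sense in which the article argues the gap "determines whether you deploy at all."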

Claude Code, Anthropic's developer-facing terminal tool, became a cultural force in Silicon Valley over the past year. Engineers build entire applications through natural-language conversation now. The New York Times profiled its rise in January. In early testing, developers preferred Sonnet 4.6 over its predecessor about 70% of the time. They even preferred it to Opus 4.5, the company's flagship from last November, 59% of the time. Less overengineering. Better follow-through on multi-step tasks.

But vibe coding runs on tokens, and tokens run on cost. Pushing near-Opus intelligence into the Sonnet tier makes every Claude Code session cheaper. More sessions mean more lock-in. More lock-in means the $380 billion valuation looks conservative, not aggressive. Michele Catasta, president at Replit, called the performance-to-cost ratio "extraordinary." David Loker, VP of AI at CodeRabbit, said the model "punches way above its weight class for the vast majority of real-world PRs." GitHub's VP of Product, Joe Binder, confirmed it's "already excelling at complex code fixes, especially when searching across large codebases."

The pattern is consistent. Enterprise after enterprise is migrating Sonnet traffic and canceling Opus API calls for anything short of the hardest reasoning work. Leo Tchourakov of Factory AI said the team is already "transitioning our Sonnet traffic over to this model." That sentence, repeated across enough customers, is what makes premium pricing collapse.

Computer use went from party trick to real threat

Buried in the benchmark numbers is a trajectory that should make every SaaS company anxious.

In October 2024, Anthropic introduced computer use, the ability for an AI model to look at a screen, click buttons, type text, and work with software the way a human would. It scored 14.9% on OSWorld at launch. "Still experimental," Anthropic said at the time. "Cumbersome and error-prone."


That was sixteen months ago. Each new Sonnet pushed the score higher. 28% by February 2025, then 42.2% by June. 61.4% by October. Now 72.5%, almost five times the original.

This matters because most business software wasn't built with APIs. Insurance portals, government databases, ERP systems, hospital scheduling tools. All designed for humans clicking through forms. A model that can do the clicking, reliably and cheaply, doesn't need an integration partner. It doesn't need a connector. It needs a screen.

Jamie Cuffe, CEO of Pace, said Sonnet 4.6 hit 94% on their insurance-specific computer use benchmark, the highest of any Claude model tested. "It reasons through failures and self-corrects in ways we haven't seen before," he told VentureBeat. Ben Kus at Box said it outperformed the previous Sonnet by 15 percentage points on heavy reasoning over enterprise documents.

And here is where the anxiety in enterprise software gets specific. Bloomberg reported that Anthropic's quiet release of a legal automation tool earlier this month helped trigger a trillion-dollar tech wipeout, hitting software companies hardest. The Opus release hammered financial services stocks. Some private software companies, including McAfee, began releasing earnings early just to reassure investors worried about what commentators dubbed the "SaaSpocalypse."

Each new capability chips away at the same question: which vendors does this make optional? When a $3 model can fill out insurance forms and reason through dense enterprise documents at near-human reliability, the integration layer that many SaaS companies sell starts looking like overhead.

What the Vending-Bench test actually revealed

The most revealing data point in the entire release had nothing to do with code generation or benchmarks. It came from a simulation.

Vending-Bench Arena pits AI models against each other in a simulated business. Each model runs autonomously for a simulated year, making investment and pricing decisions to maximize profit. No human prompts during the run. Pure autonomous planning over a 365-day horizon.

Sonnet 4.6 developed a strategy nobody told it to try. It spent heavily on capacity for the first ten months, outspending its competitors by wide margins. Then it pivoted hard to profitability in the final stretch. It finished at roughly $5,700 in balance. Sonnet 4.5 managed about $2,100. Nearly triple the return.

What stood out wasn't the profit margin. It was the patience. Sonnet 4.6 chose to lose money for ten months, absorbing short-term costs to build capacity it would monetize later. That's not autocomplete. That's capital allocation on a time horizon most quarterly-focused businesses struggle with.

Vending machines don't matter here. Time horizons do. A model that plans over months and adjusts when conditions shift isn't a chatbot anymore. It's the cheapest analyst in the building. The one-million-token context window makes it possible. You can feed it an entire codebase, a full contract portfolio, or dozens of research papers, and it reasons across all of it.

For enterprise buyers, this changes the deployment math. An agent that holds a million tokens of context without forgetting page one, at $3 per million tokens, costs less to run around the clock for a month than a single day of management consulting. The question shifts from whether to deploy to how many to run.

Who loses when the spread hits zero

Anthropic is emboldened. Fresh off the largest AI funding round ever, with a model line that compresses the performance gap between its tiers every quarter, the company is expanding into India, partnering with Infosys to build enterprise agents, and shipping product at a pace its competitors can't match. Two major model releases in 12 days. A Haiku update likely next.

OpenAI feels the pressure from both sides. The same week Sonnet 4.6 launched, OpenAI hired the creator of OpenClaw, an open-source computer use tool, signaling that it is scrambling to close a gap in agent capability, an area Anthropic has owned for over a year. GPT-5.2 trails badly on agentic computer use: 38.2% to Sonnet's 72.5%. On financial analysis, 59.0% versus 63.3%. Google's Gemini 3 Pro competes well on multilingual and visual reasoning but falls behind on the agentic categories where enterprise budgets are moving fastest.

But the real losers might be Anthropic's own premium tier. Brendan Falk, CEO of Hercules, said it plainly. "It has Opus 4.6 level accuracy, instruction following, and UI, all for a meaningfully lower cost." Opus still leads on the very hardest reasoning tasks: codebase refactoring, multi-agent coordination, problems where getting it exactly right justifies five times the cost. That's a real advantage. A shrinking one.

The test comes in three months, when the next Sonnet ships. If it closes even half the remaining gap, the Opus tier becomes a niche product. Premium pricing works only when the premium is real. Right now, for most enterprise work, it isn't.

And if you're buying enterprise software built on assumptions about what humans need to do manually, you should be watching these model releases more closely than earnings reports. The pricing spread collapsed. The capability spread is next.

Frequently Asked Questions

How does Sonnet 4.6 compare to Opus 4.6 on benchmarks?

On most enterprise benchmarks, Sonnet 4.6 matches or beats Opus 4.6. It scored higher on GDPval-AA (1633 vs 1606) and agentic financial analysis (63.3% vs 60.1%). On coding (SWE-bench), Opus leads by just 1.2 points. On computer use (OSWorld), the gap is 0.2 points. The $3 model effectively matches the $15 one.

Why would Anthropic undercut its own premium product?

Anthropic is running a volume strategy. Customers spending $100K+ annually grew sevenfold year over year. Million-dollar accounts went from 12 to over 500. At $3 per million tokens, enterprises deploy more agents, consume more tokens, and lock in deeper. The $380 billion valuation depends on infrastructure-scale adoption, not luxury margins.

What is computer use and why does it matter for enterprise software?

Computer use lets AI models interact with software visually, clicking buttons, filling forms, and navigating screens like a human. Most business software lacks APIs and was designed for manual operation. A model scoring 72.5% on OSWorld can handle these tasks without custom integrations, potentially replacing the middleware many SaaS companies sell.

What did Vending-Bench Arena reveal about Sonnet 4.6?

Vending-Bench Arena simulates autonomous businesses where AI models make investment and pricing decisions for a full year without human input. Sonnet 4.6 spent heavily for ten months building capacity, then pivoted to profitability, finishing at roughly $5,700 versus Sonnet 4.5's $2,100. It demonstrated genuine long-horizon planning capability.

Does Opus 4.6 still have advantages over Sonnet 4.6?

Yes, but they are narrowing. Opus leads on the hardest reasoning tasks including codebase refactoring, multi-agent coordination, and problems requiring maximum precision. On SWE-bench coding, Opus scores 80.8% to Sonnet's 79.6%. For work where accuracy justifies five times the cost, Opus remains the choice. For most enterprise work, the price gap no longer matches the performance gap.



The Freak World of Nicholas J. Fuentes

  • Early Career Trajectory: Nicholas J. Fuentes transitioned from a little-watched university freshman political talk show host in 2017 to a prominent figure with a cultish following known as "Groypers."
  • Methods of Gaining Notoriety: Fuentes cultivated controversy by embracing taboos, including praising Hitler and opposing interracial marriage, leading to media exposure on platforms like _The Tucker Carlson Show_.
  • Critique of Media Focus: Focusing solely on Fuentes's offensive remarks plays into his strategy, which is understood through the framework of Baudrillard's "hyperreality" as a cynical performance for attention.
  • Investigation's Focus Shift: This investigation prioritizes Fuentes's actions over his words, revealing negative consequences such as betrayal, pedophilia allegations, suicide, and murder within his political circle.
  • Groyper Loyalty and Oaths: Followers, such as Dalton Clodfelter, sometimes publicly swore an oath to "rape, kill, and die for Nicholas J. Fuentes," with departures from the movement leading to public disputes.
  • January 6 Involvement and Aftermath: Fuentes was present outside the U.S. Capitol on January 6, encouraging crowds forward, but did not enter; subsequently, he disavowed arrested Groypers who were charged in connection with the event.
  • Financial Operations and Controversies: Fuentes received a significant Bitcoin donation of approximately $250,000 from Laurent Bachelier, who committed suicide shortly after the transfer; later, he was accused by a former associate of falsely claiming his funds remained frozen by the DOJ in order to raise more money.
  • Association with Controversial Figures: Fuentes maintained ties with activist Ali Alexander following allegations of child sexual solicitation, reportedly downplayed the accusations, and allegedly pressured an accuser to recant.

Whatever else may be said about him, one fact cannot be denied: Nicholas J. Fuentes, the 27-year-old racialist influencer, is on the run of his life.

That’s a remarkable shift from 2017, when Fuentes was a little-known university freshman who hosted a little-watched political talk show. Over time, he developed a fanatical, cultlike following known as the “Groypers.” His digital following soon crossed over into the real world: first, with his involvement in the 2017 Unite the Right rally; then, with his long-running feud with conservative activist Charlie Kirk; and finally, with his role in “Stop the Steal,” which culminated outside the U.S. Capitol on January 6.

Fuentes has cultivated a reputation as the most provocative and controversial figure on the Right. He has done this by embracing taboos, praising Hitler, and opposing interracial marriage. In turn, he has ridden a wave of spectacle and outrage to new heights, with an appearance on The Tucker Carlson Show and frequent coverage in outlets such as the New York Times and The Atlantic.

Attempting to undercut his growing influence, the media and Fuentes’s right-leaning critics both tend to focus on Fuentes’s record of offensive remarks. Take Fuentes’s appearance on Piers Morgan Uncensored. During the two-hour interview, Morgan played a litany of clips from Fuentes’s talk show, highlighting bigoted comments he’s made over the years.

But this line of attack merely plays into Fuentes’s hands. As one of us has previously noted, Fuentes is best understood as an actor in what postmodern theorist Jean Baudrillard called “hyperreality.” Under conditions of hyperreality, symbols of past phenomena lose their meaning and circulate as hollowed-out images through the digital landscape, where they drive discourse and spark emotional reactions.

This is the framework through which Fuentes, with his professed admiration for Hitler and Stalin, and his embrace of anti-Semitism, should be understood. Above all, he is engaged in a performative demand for attention, cynically harnessing transgression to drive clicks, sow chaos, and gain notoriety.

By contrast, this City Journal investigation—which draws on livestreams, a review of public records, and interviews with key associates—focuses not on Fuentes’s words but on his actions. (Fuentes did not return a detailed request for comment for this article.) It looks beneath the spectacle of outrage and the self-mythology he has curated and reveals a shocking heap of human wreckage that has accumulated within Fuentes’s political universe: betrayal, pedophilia, suicide, murder.

Welcome to the freak world of Nicholas J. Fuentes.

Nicholas Joseph Fuentes was born in Chicago in 1998. He grew up in the suburb of La Grange Park. In high school, he served as student council president. After graduation, he briefly attended Boston University before dropping out. Fuentes then moved back to the Chicago area, where, every weeknight, he livestreams his talk show, America First with Nicholas J. Fuentes.

With a greenscreen backdrop, Fuentes sits behind a wooden desk, typically in a suit and tie, pontificating on political and cultural events. Frequently, he launches into expletive- and slur-laced rants. Fuentes presents himself as a dissident truthteller in a world of lies; the ringmaster of a revitalized Right; and a spokesman for young, patriotic men sold out by the elites.

The reality behind the scenes, however, is quite different from this public-facing mythology. The world that has surrounded Fuentes over the past decade is one of paranoia, intrigue, drama, and betrayals. Several of his associates have broken from his movement over the years, spilling dirt on his operation in the process, with allegations and counter-allegations following in the wake of their apostasy. At the same time, Fuentes has developed a fanatical, cultlike following, with many Groypers—at his insistence—swearing a public oath to “rape, kill, and die for Nicholas J. Fuentes.”

One of these young men is Dalton Clodfelter, a 25-year-old Fuentes follower who spoke to City Journal. Clodfelter says he first encountered Fuentes through viral video clips and eventually connected with him through an online group chat. There, he says, the two “hit it off.” At the time, Clodfelter was young, religious, and serving in the U.S. Army.

“I was pretty naïve at the time,” he recalled. “I wanted to be a commentator. I started out on radio when I was 16. I wanted to be a guy that talked and [Fuentes] was more than happy to help me along.”

In the years that followed, Clodfelter pursued a career as an online commentator, attempting to follow in Fuentes’s footsteps. Clodfelter posted openly extreme, racist, and anti-Semitic rants; he often used racial slurs. As a result, Clodfelter claims, mainstream platforms banned his accounts. Seeking an alternative, he moved over to cozy.tv, which Fuentes created to host his own show after being banned from other platforms.

“I was making really great money streaming,” Clodfelter said. “[Then] all of a sudden, like, these income sources are being taken away. . . . Now, I’ve got to figure out a new way to generate this income and supplement these things, and that really did suck on top of all the social media banning. The good thing was, Nick offered me a platform.”

Clodfelter’s association with Fuentes and his own language—including a pledge to “rape, kill, and die”—pushed him further and further from the real world. As he got deeper into the Groyper cult, Clodfelter underwent a period of dramatic disintegration: he was kicked out of the military, saw his income streams dry up, and watched his first marriage end in divorce, he claims; his online comments grew more extreme, erratic, and disturbed.

The end result was predictable. In a livestream, Clodfelter admitted he’d become unemployable. “I’m really untouchable,” he said. “I can’t go work for a congressman, that’s not going to happen. I can’t go get a job even at, like, a McDonald’s . . . . You look me up—look my name up . . . see what you find, and let me know if you’d hire me.”

Yet Clodfelter is unrepentant about his association with Fuentes and harbors no “resentment” for the personal cost he’s paid.

“I appreciate Nick and everything he’s done and what he’s built and everything he did for me,” he says. Fuentes might have led him down the path to personal ruin, but he was the one who took the final steps. “I decided to say the N-word. I decided to say I love Hitler . . . .  And now I’m here.”

In the grand scheme of things, Clodfelter might have gotten off easy. Other Groypers have ended up in prison.

On January 6, 2021, Fuentes was in Washington, D.C., where he joined crowds contesting the 2020 presidential election results. Near the Capitol, Fuentes urged the crowd onward.

“Keep moving towards the Capitol!” he shouted into a bullhorn. “It appears we are taking the Capitol back from the police right now! Keep marching and don’t relent. Never relent! Break down the barriers and disregard the police!”

Numerous Groypers entered the Capitol that day. One rioter even carried an “America First” flag into the building. But after urging his followers toward the Capitol, Fuentes himself did not go inside. He was investigated but has not been arrested or charged in association with the day’s events.

This fact has raised eyebrows within far-right circles, particularly given the video evidence of him encouraging others to disobey the police. A former FBI agent, who spoke to City Journal on the condition of anonymity, said agents were instructed to “take down every person who was at January 6.” Fuentes, through his attorney, denied having been knowingly interviewed by federal law enforcement.

While Fuentes may have escaped the long arm of the law, other Groypers present that day weren’t so lucky. In the months that followed, at least seven were charged and sentenced for entering the Capitol on January 6. In the aftermath of their arrests, Fuentes disavowed the arrested Groypers, calling them “losers” and saying he did not “wish to be associated with” them. They may have pledged to “rape, kill, and die,” but in the Groyper movement, loyalty appears to run in only one direction.

One question has dogged Fuentes for years: How does he make his money and finance his political operation? During his livestreams, fans must pay a minimum of $20 to pose questions in his Q&A sessions. He has solicited donations and hawks America First merchandise and subscriptions to his $100-a-month private Telegram channel.

Fuentes’s largest known donor to date was a French computer programmer and cryptocurrency millionaire named Laurent Bachelier. In December 2020, as right-wing factions were organizing the “Stop the Steal” protest in Washington, D.C., Bachelier transferred roughly $522,000 in Bitcoin to a network of white nationalist and neo-Nazi activists. The largest recipient, by far, was Fuentes, who received roughly $250,000. Hours after initiating the Bitcoin transfers, Bachelier committed suicide at the age of 35.

Within three months, Bachelier’s donation to Fuentes had effectively doubled in value due to a significant spike in the price of Bitcoin.

After January 6, Fuentes’s finances appear to have been briefly imperiled. The Gray Zone, an investigative outlet edited by hard-left journalist Max Blumenthal, published a purported letter from the Department of Justice to Fuentes stating that the DOJ had frozen accounts belonging to him on January 23, 2021. According to the same purported letter, however, the DOJ had withdrawn its freeze request by the summer of 2021.

In spite of this, Fuentes allegedly continued to claim publicly that his funds remained frozen. That narrative unraveled when Jaden McNeil, a former close associate and the treasurer of Fuentes’s nonprofit, said on a livestream that Fuentes had regained access to the funds, a claim corroborated by the Gray Zone’s purported records.

“He’s had that [money] back from the feds for almost a year,” McNeil said on the livestream, accusing Fuentes of using the story to raise money while, at the same time, purchasing an expensive car. “He’s a total liar.”

Over the years, Fuentes has also sought money from at least one prominent right-wing benefactor, according to multiple sources familiar with the matter who spoke on the condition of anonymity. One former associate claims that in January 2022, Fuentes was scouting real estate in Tampa Bay, where he was considering moving his political operation. During the trip, the associate claims, Fuentes drove four hours to Miami to meet a then-employee of Peter Thiel, the right-wing venture capitalist who has financially backed Republican causes, including the campaigns of President Donald Trump and then-Senate candidate J. D. Vance.

Armed with a presentation, Fuentes pitched Thiel’s employee on funding his show and political operation, the sources said. Thiel, one source claimed, never replied to Fuentes’s request.

Soon after, Fuentes began launching a series of public attacks not only on Thiel, whom he called “the CIA,” but also on Trump and Vance. Fuentes now also bemoans the influence of Thiel money in right-wing circles, despite having allegedly sought out the very funding he condemns. Multiple former Fuentes associates who spoke to City Journal say the Thiel episode is telling, as it reveals Fuentes’s true motivations: not political grievances, but personal ones.

Conversations with multiple former associates paint a picture of Fuentes’s positions as rooted not in principle but in his sense of personal advantage and in perceived threats to his ego.

“[There’s an] extraordinary gap between the way he describes himself and presents himself to his followers . . . and his real life,” said former Fuentes ally Milo Yiannopoulos. “He’s somebody who doesn’t believe in anything at all.”

Perhaps the most sordid chapter in the Fuentes story involves his relationship with Ali Alexander, a political activist and key organizer of “Stop the Steal.” In 2023, Alexander was accused of asking teenage boys for sexual photographs.

Amid the fallout from the scandal, Fuentes initially “disavowed” Alexander’s actions, while lashing out at the accusers, saying the teenage boys were guilty of “flirting” with Alexander. “The real victim in this entire saga is me,” Fuentes said.

He had also previously downplayed the allegations on his show. “If we’re talking about rape and murder and things like that, okay, but we’re talking about flirting?” Fuentes said. “Give me a f***ing break.”

Fuentes has long been aware of Alexander’s alleged proclivities. On a livestream in 2017, Fuentes said there was “abundant evidence” that Alexander “would like to interact in a sexual way with young fashy white boys quite like myself, and like others.” In a video posted by hard-right journalist Dustin Nemos—which purportedly depicts events from November 2020, during the so-called Million MAGA March—a young-looking male, whom Nemos describes as a teenager, recounts to Fuentes a conversation in which the alleged teenager called Alexander a “pedo.” Fuentes responded: “How do people not know about it?”

Despite this history, Fuentes continued to associate with Alexander, appearing alongside him at various political events, including one Fuentes encouraged his supporters to attend.  After the scandal broke, Fuentes reportedly pressured one of Alexander’s accusers to recant his allegation in exchange for securing him a job in politics. (Fuentes denied this at the time; he did not return a City Journal request for clarifying comment.) That year, Fuentes also acknowledged on a livestream that, during the Kanye West presidential campaign, he had remained in daily contact with Alexander despite the allegations. In 2024, a Colorado police department confirmed that it was working with “partner agencies” in Texas to investigate Alexander.

Alexander did not return a request for comment.

The Alexander scandal is not an aberration, but part of a broader culture within the Groyper movement. In 2022, Alejandro Richard Velasquez Gomez, a Fuentes fan who had been photographed with him at a political event, was charged with possession of child pornography and with making interstate threats against a Turning Point USA event. In the aftermath of the arrest, Fuentes attempted to distance himself, saying Gomez had “nothing to do with me or America First.”

Fuentes’s own comments on sexual topics are equally disturbing. He has stated that convicted sex offender Jeffrey Epstein was “cool as f***”; has expressed skepticism about age-of-consent laws; and has indicated that he wants to marry a 16-year-old girl when he turns 30. For the Groypers, the transgression of political and sexual taboos is part of Fuentes’s appeal. The more furious the denunciations of Fuentes—racist, sexist, pedophile, Nazi—the deeper his audience digs in. “These labels don’t mean anything,” says Clodfelter, the Fuentes follower. “Those words are meaningless to us now . . . . It doesn’t affect us anymore. And Nick proves that.”

From one perspective, we might be tempted to dismiss Fuentes as a hyperreal spectacle. His Groyper movement relates to politics the same way pornography relates to sex: stripped of meaning, flattened into an image, sensationalized for personal gain.

But hyperreality does not mean unreal, and, like pornography, it can often intrude on reality in ugly and unexpected ways. Conservatives who care about the future of the movement should understand that Fuentes is corrosive. Despite his self-mythology as the most right-wing pundit in America, he is, in a real sense, a tool of the Left. Progressive activists have spent the past decade pushing the narrative that conservatives are Nazis and Donald Trump is Hitler. In Fuentes, they have finally found their man.

The spectacle, moreover, is not only politically corrosive but personally corrosive to its participants. At heart, the Fuentes phenomenon is not about ideas, or ideology, or power; it is about an angry, broken young man leading other angry, broken young men down a digital trail that ends in ruin.

Fuentes might laugh about his command for his followers to “rape, kill, and die for Nicholas J. Fuentes,” explaining it away as irony or a morbid joke. But it was only a matter of time before one of his followers, lost in the cult atmosphere of the Groyper movement and unable to distinguish reality from hyperreality, would take it seriously.

That is precisely what happened last January, when a black teenager named Solomon Henderson walked into Antioch High School, outside Nashville, with a 9mm pistol and fired 10 rounds into the cafeteria, killing a 16-year-old girl and injuring another student, before turning the gun on himself.

The purported manifesto Henderson left behind—a copy of which, found online, corresponded to contemporary news descriptions—revealed a fascination with mass shooters, including the neo-Nazi Anders Breivik, who killed dozens of children in Norway in 2011; and a pastiche of radical ideologies lifted from internet forums and imageboards. At the logical level, these ideologies, especially when set against Henderson’s racial identity, seem to make little sense. But in the freak world, populated by a rainbow of nihilists, a black teenager, drunk on the hyperreal spectacle, can murder a young girl in a school cafeteria while calling himself an “Involuntary N[*]gger” and a “Groyper Incel.”

In the manifesto, Henderson asked himself, “Is there a particular groups [sic] or people that radicalized you the most?” He weaves one answer into the subsequent list of irony-laden names, first in plain lettering and later in all caps: NICK FUENTES.

Ryan Thorpe is an investigative reporter at the Manhattan Institute. Christopher F. Rufo is a senior fellow at the Manhattan Institute, a contributing editor of City Journal, and the author of America’s Cultural Revolution.

Photo by Dominic Gwinn/Getty Images


Pentagon Threatens Anthropic With Supply Chain Risk Label Over Military AI Limits

  • Supply Chain Sanctions: The Pentagon is moving to designate Anthropic as a "supply chain risk" due to the company's refusal to grant unrestricted military access to its AI model, Claude.
  • Policy Deadlock: Negotiations over the renewal of the "Claude Gov" classified version have stalled as Anthropic seeks to prohibit use for autonomous weaponry and mass surveillance of American citizens.
  • Mandatory Divestment: A formal risk designation would require all defense contractors and their suppliers to remove Claude from their internal workflows and document analysis systems.
  • Venezuela Operation: Reports indicate that Claude was utilized through a Palantir partnership during a January military raid in Caracas, despite company terms prohibiting violent deployments.
  • Operational Demands: Defense officials are demanding an "all lawful purposes" standard for AI usage, arguing that ethical guardrails create unworkable gray areas for battlefield commanders.
  • Market Competition: Competitors including Google, OpenAI, and xAI have reportedly agreed to remove military guardrails on unclassified systems and are seeking access to classified networks.
  • Technical Superiority: Claude Gov currently remains the only AI model operating on the military's classified networks, as competing models are regarded by some officials as less advanced for intelligence analysis.
  • Economic Leverage: The Pentagon is using the supply chain designation as a tool to signal that domestic technology firms must prioritize military requirements over internal ethical frameworks to maintain government eligibility.

The Pentagon is preparing to designate Anthropic as a "supply chain risk" and sever business ties with the AI company over its refusal to allow unrestricted military use of Claude, Axios reported on Monday. Defense Secretary Pete Hegseth is "close" to making the decision after months of failed negotiations, a senior Pentagon official told the outlet. "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this," the official said.

Supply chain risk designations are typically reserved for foreign adversaries and hostile actors, the kind of label slapped on Huawei routers or Russian software vendors. Not American companies. Applying it to Anthropic, widely regarded as one of the country's leading AI developers, would represent an extraordinary act of retaliation by the Pentagon against a domestic technology firm.

The Breakdown

  • Pentagon preparing to label Anthropic a "supply chain risk," forcing all military contractors to drop Claude
  • Dispute centers on mass surveillance and autonomous weapons; Pentagon demands "all lawful purposes" standard
  • Claude reportedly used in Venezuela/Maduro raid via Palantir; Anthropic's terms prohibit violent deployment
  • Google, OpenAI, and xAI agreed to remove military guardrails but none yet operate on classified networks

What the fight is about

Anthropic and the Pentagon have spent months in contentious negotiations over the renewal of Claude Gov, the classified version of Claude built specifically for the U.S. national security apparatus. Two issues are holding up the deal. Anthropic wants to ensure Claude isn't used for mass surveillance of American citizens or to develop weapons that fire without human involvement. The Pentagon is demanding the right to use Claude for "all lawful purposes," a standard it is also pressing on Google, OpenAI, and Elon Musk's xAI.

"The Department of War's relationship with Anthropic is being reviewed," chief Pentagon spokesperson Sean Parnell told The Hill on Monday. "Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people."

The company's position is that current U.S. surveillance law was never written with AI in mind. The government already collects enormous quantities of personal data, from social media posts to concealed carry permits. AI could supercharge that authority in ways existing statutes don't contemplate. Think pattern-matching across millions of civilian records, cross-referencing behavior at a scale no human analyst corps could replicate. Anthropic has said it is prepared to loosen its current terms but wants explicit boundaries.

Pentagon officials counter that the restrictions are too broad and would create unworkable gray areas on the battlefield. Defense officials have argued that drawing bright lines around specific capabilities would hamper operations in scenarios nobody can fully predict yet, a concern that Anthropic's critics inside the Pentagon call naive at best and obstructionist at worst.

The relationship appears to have soured beyond the specifics of the contract. A source familiar with the negotiations told Axios that senior defense officials had been frustrated with Anthropic for some time and "embraced the opportunity to pick a public fight."

The Venezuela trigger

The Wall Street Journal reported last week that Claude was used during the U.S. military operation in January that captured Venezuelan President Nicolas Maduro from his residence in Caracas. The AI tool reached the battlefield through Anthropic's partnership with Palantir Technologies, which holds extensive military and intelligence contracts. That revelation turned a simmering contract dispute into something more volatile.

Venezuelan authorities said 83 people were killed in bombings across the capital during the raid. Anthropic's own terms of use prohibit Claude's deployment for violent purposes, weapons development, or surveillance. Claude Gov has capabilities ranging from processing classified documents and intelligence analysis to cybersecurity data interpretation, and it was unclear which of those functions the military called on during the operation.

Anthropic declined to comment on whether its technology played a role but said any use would need to comply with its usage policies. Palantir and the Pentagon declined to comment.

According to The Hill, reports of the raid prompted Anthropic to ask whether its technology had been involved, though the company has denied making any outreach to the Pentagon or Palantir about the incident. The question itself was enough to deepen the rift. Here was a company that had staked its brand on responsible development, now staring at reports that its technology may have supported a military operation that killed dozens of people. Nobody confirmed anything. Nobody denied it either. And somewhere in Arlington, the contract renewal was still sitting on a desk.

What supply chain risk actually means

If the Pentagon goes through with the designation, the consequences reach far beyond Anthropic’s military contract. Every company that does business with the Pentagon would need to certify that it does not use Claude in its own workflows. That’s the kind of compliance burden normally imposed on Chinese telecom equipment or adversary-state technology. Not on a San Francisco AI startup.

Losing the military deal itself wouldn't cripple Anthropic. The two-year contract, announced last July, is worth up to $200 million, a fraction of the company's reported $14 billion in annual revenue. The real damage is the ripple.


Eight of the ten largest U.S. companies use Claude, according to Anthropic. The technology sits inside enterprise systems across finance, healthcare, legal services, and government, running on laptops in procurement offices and servers in contractor data centers. Defense contractors and their suppliers who rely on Claude for document analysis, coding assistance, or internal operations would all need to strip it out or prove they'd stopped. That's the "enormous pain in the ass" the Pentagon official was describing, and it lands on the companies that depend on Anthropic's tools, not just on Anthropic itself.

Then there's the market signal. If you're an enterprise buyer evaluating AI vendors, the designation forces a new question into the procurement conversation: does choosing Claude put future government contracts at risk? That kind of uncertainty doesn't hit revenue immediately. It shows up in renewal conversations, in procurement meetings where a general counsel raises a flag, in RFP language that quietly specifies "Pentagon-compatible AI providers." The Pentagon knows this. A $200 million contract is a bargaining chip. A supply chain designation is a weapon.

Competitors already in the room

Pentagon officials have made clear they have alternatives. Google, OpenAI, and xAI have all agreed to remove their guardrails for military use on unclassified systems, Axios reported. All three are negotiating access to classified military networks. Pentagon officials expressed confidence that the companies would accept the "all lawful purposes" standard.

But swapping out Anthropic would not be painless. A senior administration official acknowledged that competing models “are just behind” for specialized government applications. Claude Gov was purpose-built for handling classified materials, interpreting intelligence, and processing cybersecurity data. It remains the only AI model running on the military’s classified networks. OpenAI has built a custom GenAI.mil tool for the Pentagon and allied nations. Google already provides a customized version of Gemini for defense research. xAI, owned by Musk, signed a Pentagon agreement in January. None of them operate on classified systems yet.

Officials who have used Claude Gov rate it highly. No complaints about the tool's capability during the Maduro operation or anywhere else. The grievance is entirely about terms, not technology. Which makes the threatened divorce harder to read at face value: the Pentagon wants to banish the product it praises most.

Precedent matters as much as the switch itself. How Anthropic's negotiations resolve will shape the contract terms for every AI company seeking classified military work. OpenAI, Google, and xAI are watching. A source familiar with those discussions told Axios that "much is still undecided" despite the Pentagon's confidence, suggesting the three companies haven't fully committed to whatever the Pentagon is asking behind closed doors.

A loyalty test dressed as policy

Anthropic has built its identity around the idea that powerful AI requires guardrails. CEO Dario Amodei has called publicly for regulation to prevent catastrophic harms from AI. He has warned against autonomous lethal operations and mass surveillance on U.S. soil. The company raised billions in funding partly on the premise that responsible development and commercial success are compatible goals. Investors bought that vision. Enterprise customers bought it. The Pentagon, for a while, bought it too.

Hegseth tested that premise in January when he told reporters the Pentagon wouldn't "employ AI models that won't allow you to fight wars." That framing collapses Anthropic's specific objections, on surveillance law, on weapons autonomy, into something blunter: a refusal to cooperate. It turns a policy dispute into a question of allegiance. Other AI companies appear to have taken note. Google employees staged walkouts over a Pentagon drone contract called Project Maven back in 2018. That was a different era. Google is now negotiating classified network access for Gemini, no Anthropic-style restrictions attached.

Anthropic hasn't walked away from the table. A spokesperson told The Hill on Monday that Anthropic remained "committed" to supporting U.S. national security and described the ongoing conversations as "productive" and conducted "in good faith." The company pointed out it was the first frontier AI developer to put models on classified networks and the first to build customized models for national security customers.

Good faith has limited currency when the other side is telling reporters you'll "pay a price." Pentagon officials aren't characterizing this as a disagreement between reasonable parties hashing out contract language. They're treating Anthropic's ethical boundaries like a contamination risk, something to be flagged and cut out of the supply chain entirely.

Anthropic built the most advanced classified AI system the Pentagon has ever deployed. That same technology reportedly played a role in an active military raid. Now the company faces classification alongside hostile foreign suppliers, not because the technology failed, but because its maker wanted a say in how it gets used. The contamination the Pentagon identified isn't in the code. It's in the conditions.

Frequently Asked Questions

What is a supply chain risk designation?

A Pentagon classification that bars a company's products from the U.S. military supply chain. It's typically reserved for foreign adversaries like Chinese telecom firms. If applied to Anthropic, every defense contractor would need to certify it doesn't use Claude, potentially disrupting operations at companies across finance, healthcare, and government.

What is Claude Gov?

A specialized version of Anthropic's Claude AI built for the U.S. national security apparatus. It handles classified materials, interprets intelligence, and processes cybersecurity data. It is currently the only AI model running on the military's classified networks, giving Anthropic a unique position among AI companies serving the Pentagon.

How was Claude used in the Venezuela operation?

The Wall Street Journal reported that Claude was deployed during the January military operation that captured Venezuelan President Nicolas Maduro, accessed through Anthropic's partnership with Palantir Technologies. The specific role Claude played remains unclear. Anthropic declined to confirm involvement, and both Palantir and the Pentagon declined to comment.

What restrictions is Anthropic insisting on?

Anthropic wants contract language preventing Claude from being used for mass surveillance of American citizens or to develop weapons that fire without human involvement. The company argues current surveillance law doesn't account for AI's ability to cross-reference civilian data at massive scale. The Pentagon considers these restrictions too broad for military operations.

Are other AI companies willing to work with the Pentagon without restrictions?

Google, OpenAI, and xAI have agreed to remove guardrails for military use on unclassified systems and are negotiating classified network access. Pentagon officials say all three would accept an "all lawful purposes" standard. However, a senior administration official acknowledged competing models are "just behind" Claude Gov for specialized government applications.

