Strategic Initiatives

Friends Don't Let Friends Use Ollama | Sleeping Robots


LLM (google/gemini-3.1-flash-lite-20260507) summary:

  • Unearned Popularity: the project achieved its market position by being a first-mover wrapper for llama.cpp rather than through technical innovation.
  • Attribution Evasion: the maintainers intentionally obscured their reliance on upstream technology by failing to provide required license notices for over a year.
  • Engine Inferiority: attempts to build a proprietary inference backend resulted in significant performance regression and compatibility bugs compared to the original engine.
  • Deceptive Branding: the platform deliberately mislabels smaller distilled models as full-scale releases to artificially inflate download metrics.
  • Questionable Openness: the release of a closed-source binary and the subsequent silence on licensing concerns deviate from the project's foundation in open-source principles.
  • Restrictive Workflows: user-defined configuration files unnecessarily duplicate existing metadata and force bloated storage patterns that hinder efficiency.
  • Cloud Encroachment: the shift toward hosted services compromises the initial mission of private local inference by routing data through external entities.
  • Venture Capital Incentives: strategic decisions prioritize investor-friendly lock-in metrics and proprietary packaging over community health and ecosystem contribution.

Ollama is the most popular way to run local LLMs. It shouldn’t be. It gained that position by being first: the first tool that made llama.cpp accessible to people who didn’t want to compile C++ or write their own server configs. That was a real contribution, briefly. But the project has since spent years systematically obscuring where its actual technology comes from, misleading users about what they’re running, and drifting from the local-first mission that earned it trust in the first place. All while taking venture capital money.

This isn’t a “both sides” piece. I’ve used Ollama. I’ve moved on. Here’s why you should too.

# A llama.cpp Wrapper With Amnesia

Ollama’s entire inference capability comes from llama.cpp, the C++ inference engine created by Georgi Gerganov in March 2023. Gerganov’s project is what made it possible to run LLaMA models on consumer laptops at all: he hacked together the first version in an evening, and it kicked off the entire local LLM movement. Today llama.cpp has over 100,000 stars on GitHub, 450+ contributors, and is the foundation that nearly every GGUF-based tool depends on.

Ollama was founded in 2021 by Jeffrey Morgan and Michael Chiang, both previously behind Kitematic, a Docker GUI that was acquired by Docker Inc. They went through Y Combinator’s Winter 2021 batch, raised pre-seed funding, and launched publicly in 2023. From day one, the pitch was “Docker for LLMs”, a convenient wrapper that downloads and runs models with a single command. Under the hood, it was llama.cpp doing all the work.

For over a year, Ollama made no mention of llama.cpp anywhere: not in the README, not on the website, not in their marketing materials. The project’s binary distributions didn’t include the required MIT license notice for the llama.cpp code they were shipping. This isn’t a matter of open-source etiquette; the MIT license has exactly one major requirement: include the copyright notice. Ollama didn’t.

The community noticed. GitHub issue #3185 was opened in early 2024 requesting license compliance. It went over 400 days without a response from maintainers. When issue #3697 was opened in April 2024 specifically requesting llama.cpp acknowledgment, community PR #3700 followed within hours. Ollama’s co-founder Michael Chiang eventually added a single line to the bottom of the README: “llama.cpp project founded by Georgi Gerganov.”

The response to the PR was revealing. Ollama’s team wrote: “We spend a large chunk of time fixing and patching it up to ensure a smooth experience for Ollama users… Overtime, we will be transitioning to more systematically built engines.” Translation: we’re not going to give llama.cpp prominent credit, and we plan to distance ourselves from it anyway.

As one Hacker News commenter put it: “I’m continually puzzled by their approach, it’s such self-inflicted negative PR. Building on llama is perfectly valid and they’re adding value on ease of use here. Just give the llama team proper credit.” Another: “The fact that Ollama has been downplaying their reliance on llama.cpp has been known in the local LLM community for a long time.”

# The Fork That Made Things Worse

In mid-2025, Ollama followed through on that distancing. They moved away from using llama.cpp as their inference backend and built a custom implementation directly on top of ggml, the lower-level tensor library that llama.cpp itself uses. Their stated reason was stability: llama.cpp moves fast and breaks things, and Ollama’s enterprise partners need reliability.

The result was the opposite. Ollama’s custom backend reintroduced bugs that llama.cpp had solved years ago. Community members flagged broken structured output support, vision model failures, and GGML assertion crashes across multiple versions. Models that worked fine in upstream llama.cpp failed in Ollama, including new releases like GPT-OSS 20B, where Ollama’s implementation lacked support for tensor types that the model required. Georgi Gerganov himself identified that Ollama had forked and made bad changes to GGML.

The irony is thick. They downplayed their dependence on llama.cpp for years, then when they finally tried to go it alone, they produced an inferior version of the thing they refused to credit.

Benchmarks tell the story. Multiple community tests show llama.cpp running 1.8x faster than Ollama on the same hardware with the same model: 161 tokens per second versus 89. On CPU, the gap is 30-50%. A recent comparison on Qwen-3 Coder 32B showed ~70% higher throughput with llama.cpp. The performance overhead comes from Ollama’s daemon layer, poor GPU offloading heuristics, and a vendored backend that trails upstream.
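
If you want to sanity-check these numbers on your own hardware, both projects ship the tooling. A minimal sketch; the model names and paths are placeholders, not the exact configurations the community benchmarks used:

    # llama.cpp: llama-bench reports prompt-processing and generation tokens/s.
    llama-bench -m ./model-q4_k_m.gguf -p 512 -n 128

    # Ollama: --verbose prints an "eval rate" (tokens/s) after each response.
    ollama run some-model --verbose "Summarize the GGUF spec in one sentence."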

# Misleading Model Naming

When DeepSeek released its R1 model family in January 2025, Ollama listed the smaller distilled versions (models like DeepSeek-R1-Distill-Qwen-32B, which are fine-tuned Qwen and Llama models, not the actual 671-billion-parameter R1) simply as “DeepSeek-R1” in their library and CLI. Running ollama run deepseek-r1 pulls an 8B Qwen-derived distillate that behaves nothing like the real model.

This wasn’t an oversight. DeepSeek themselves named these models with the “R1-Distill” prefix. Hugging Face listed them correctly. Ollama stripped the distinction. The result was a flood of social media posts from people claiming they were running “DeepSeek-R1” on consumer hardware, followed by confusion about why it performed poorly, doing reputational damage to DeepSeek in the process.

GitHub issues #8557 and #8698 requested separation of the models. Both were closed as duplicates with no fix. As of today, ollama run deepseek-r1 still launches a tiny distilled model. Ollama knew the difference and chose to obscure it, presumably because “DeepSeek-R1” drives more downloads than “DeepSeek-R1-Distill-Qwen-32B” does.
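
If you are stuck on Ollama, the only defense is pulling by an explicit size tag. A sketch assuming the library’s current tag naming; the tags are illustrative:

    ollama run deepseek-r1          # default: a small Qwen-derived distillate
    ollama run deepseek-r1:671b     # the actual R1, if your hardware can take it
    # Hugging Face keeps the distinction in the name:
    #   deepseek-ai/DeepSeek-R1  vs.  deepseek-ai/DeepSeek-R1-Distill-Qwen-32B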

# The Closed-Source App

In July 2025, Ollama released a GUI desktop app for macOS and Windows. The app was developed in a private repository (github.com/ollama/app), shipped without a license, and the source code wasn’t publicly available. For a project that had built its reputation on being open-source, this was a jarring move.

Community members immediately raised concerns. The license issue received 40 upvotes. Developers found potential AGPL-3.0 dependencies in the binary. The website placed the download button next to a GitHub link, giving the impression users were downloading the MIT-licensed open-source tool when they were actually getting an unlicensed closed-source application. Maintainers were silent for months. The code was eventually merged into the main repo in November 2025, but the initial rollout revealed where the project’s instincts lie.

As XDA put it: “If your project trades on being open source, you do not get to be vague about what is and is not open at launch.”

# The Modelfile: Reinventing a Solved Problem

GGUF, the model format created by Georgi Gerganov, was designed with one core principle: single-file deployment. Bullet point #1 in the GGUF spec reads: “Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.” Chat templates, stop tokens, model metadata: it’s all embedded in the file. You point llama.cpp at a GGUF and it works.
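
You can verify this yourself. The gguf Python package (pip install gguf) ships a metadata dump utility; the model path below is a placeholder:

    # Print the metadata llama.cpp reads straight from the file,
    # including the embedded tokenizer.chat_template:
    gguf-dump ./model-q4_k_m.gguf | grep -i -e chat_template -e general.name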

Ollama added the Modelfile on top of this. It’s a separate configuration file, inspired by Dockerfiles, naturally, that specifies the base model, chat template, system prompt, sampling parameters, and stop tokens. Most of this information already exists inside the GGUF file. As one Hacker News commenter put it: “We literally just got rid of that multi-file chaos only for Ollama to add it back.”

The problems with this approach compound quickly. Ollama only auto-detects chat templates it already knows about from a hardcoded list. If a GGUF file has a valid Jinja chat template embedded in its metadata but it doesn’t match one of Ollama’s known templates, Ollama falls back to a bare {{ .Prompt }} template, silently breaking the model’s instruction format. The user has to manually extract the chat template from the GGUF, translate it into Go template syntax (which is different from Jinja), and write it into a Modelfile. Meanwhile, llama.cpp reads the embedded template and just uses it.
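
Here is what that manual translation looks like in practice, sketched for a Mistral-style instruct format (the file name and template are illustrative, not any model’s official Modelfile):

    cat > Modelfile <<'EOF'
    FROM ./mistral-7b-instruct-q4_k_m.gguf
    # Hand-translated into Go template syntax from the Jinja template
    # that is already embedded in the GGUF's metadata:
    TEMPLATE """[INST] {{ if .System }}{{ .System }} {{ end }}{{ .Prompt }} [/INST]"""
    PARAMETER stop "[INST]"
    PARAMETER stop "[/INST]"
    EOF
    ollama create mistral-custom -f Modelfile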

Modifying parameters is worse. If you want to change the temperature or system prompt on a model you pulled from Ollama’s registry, the workflow is: export the Modelfile with ollama show --modelfile, edit it, then run ollama create to build a new model entry. Users have reported that this process copies the entire model, 30 to 60 GB, to change one parameter. As one user described it: “The ‘modelfile’ workflow is a pain in the booty. It’s a dogwater pattern and I hate it. Some of these models are 30 to 60GB and copying the entire thing to change one parameter is just dumb.”
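
The full cycle, using the commands the article names (model names are placeholders):

    ollama show --modelfile my-model > Modelfile    # export the config
    echo 'PARAMETER temperature 0.4' >> Modelfile   # the one-line change
    ollama create my-model-t04 -f Modelfile         # rebuild as a new entry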

Compare this to llama.cpp, where parameters are command-line flags. Want a different temperature? Pass --temp 0.7. Different system prompt? Pass it in the API request. No files to create, no gigabytes to copy, no proprietary format to learn.
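
The llama.cpp side of that comparison, assuming llama-server’s default port of 8080 (paths are placeholders):

    llama-server -m ./model-q4_k_m.gguf --temp 0.7 -c 8192

    # The system prompt travels with the request, not with the model:
    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "Hello"}
          ]}'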

The Modelfile also locks users into Ollama’s Go template syntax, which is a different language from the Jinja templates that model creators actually publish. LM Studio accepts Jinja templates directly. llama.cpp reads them from the GGUF. Only Ollama requires you to translate between template languages, and gets it wrong often enough that entire GitHub issues are dedicated to mismatched templates between Ollama’s library and the upstream GGUF metadata.

# The Registry Bottleneck

When a new model drops, say a new Qwen, Gemma, or DeepSeek variant, GGUFs typically appear on Hugging Face within hours, quantized by community members like Unsloth or Bartowski. With llama.cpp, you can run them immediately: llama-server -hf unsloth/Qwen3.5-35B-A3B-GGUF:Q4_K_M. One command, straight from Hugging Face, no intermediary.

With Ollama, you wait. Someone at Ollama has to package the model for their registry, choose which quantizations to offer (typically just Q4_K_M and Q8_0, no Q5, Q6, or IQ quants), convert the chat template to Go format, and push it. Until then, the model doesn’t exist in Ollama’s world unless you do the Modelfile dance yourself.

This creates a recurring pattern on r/LocalLLaMA: new model launches, people try it through Ollama, it’s broken or slow or has botched chat templates, and the model gets blamed instead of the runtime. A recent PSA post titled “If you want to test new models, use llama.cpp/transformers/vLLM/SGLang” documented how Qwen models showed problems with tool calls and garbage responses that “only happen with Ollama” due to their vendored backend and broken template handling. As one commenter put it: “Friends don’t let friends use ollama.”

The quantization limitation is particularly frustrating. Ollama only supports creating Q4_K_S, Q4_K_M, Q8_0, F16, and F32 quantizations. If you need Q5_K_M, Q6_K, or any IQ quant, formats that llama.cpp has supported for years, you’re out of luck unless you do the quantization yourself outside of Ollama. When a user asked about Q2_K support, the response was effectively “use a different tool.” For a project that markets itself as the easy way to run models, telling users to go elsewhere for basic quantization options is telling.
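
“Outside of Ollama” amounts to one command with llama.cpp’s quantize tool (paths are placeholders; the F16 source GGUF comes from the usual conversion scripts):

    # Produce the Q5_K_M quant that Ollama will not make for you:
    llama-quantize ./model-f16.gguf ./model-q5_k_m.gguf Q5_K_M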

Hugging Face eventually added support for ollama run hf.co/{repo}:{quant} by generating a Docker-style manifest on the fly, which partially addresses the availability problem. But even then, the file gets copied into Ollama’s hashed blob storage, you still can’t share the GGUF with other tools, and the template detection issues still apply. The fundamental architecture remains: Ollama inserts itself as a middleman between you and your models, and that middleman is slower, less capable, and less compatible than the tools it sits on top of.

# The Cloud Pivot

In late 2025, Ollama introduced cloud-hosted models alongside its local library. The tool that was synonymous with local, private inference started routing prompts to third-party cloud providers. Proprietary models like MiniMax appeared in the model list without clear disclosure that selecting them would send your data off-machine.

Users raised concerns about data routing: when you run a closed-source model like MiniMax-m2.7 through “Ollama Cloud,” your prompts may be forwarded to the external provider who actually hosts the model. Ollama’s own documentation says “we process your prompts and responses to provide the service but do not store or log that content,” but says nothing about what the third-party provider does with it. For models hosted by Alibaba Cloud, users noted there is no zero-data-retention guarantee.

This was compounded by CVE-2025-51471, a token exfiltration vulnerability that affects all Ollama versions. A malicious registry server can trick Ollama into sending its authentication token to an attacker-controlled endpoint during a normal model pull. The fix exists as a PR but took months to land. In a tool that built its brand on local privacy, a vulnerability that leaks credentials to arbitrary servers is not a minor issue; it’s an architectural philosophy problem.

# The VC Pattern

All of this makes more sense when you look at the incentive structure. Ollama is a Y Combinator-backed (W21) startup, founded by engineers who previously built a Docker GUI that was acquired by Docker Inc. The playbook is familiar: wrap an existing open-source project in a user-friendly interface, build a user base, raise money, then figure out monetization.

The progression follows the pattern cleanly:

  1. Launch on open source: build on llama.cpp, gain community trust
  2. Minimize attribution: make the product look self-sufficient to investors
  3. Create lock-in: a proprietary model registry format, hashed filenames that don’t work with other tools
  4. Launch closed-source components: the GUI app
  5. Add cloud services: the monetization vector

The model registry is worth examining. Ollama stores downloaded models using hashed filenames in its own format. If you’ve been pulling models through Ollama for months, you can’t just point llama.cpp or LM Studio at those files without extra work. You can bring your own GGUFs to Ollama via a Modelfile, but it’s deliberately friction-filled to take them out. This is a form of vendor lock-in that most users don’t notice until they try to leave.

# What To Use Instead

The tools Ollama wraps are directly accessible, and they’re not much harder to set up.

llama.cpp is the engine. It has an OpenAI-compatible API server (llama-server), a built-in web UI, full control over context windows and sampling parameters, and consistently better throughput than Ollama. In February 2026, Gerganov’s ggml.ai joined Hugging Face to ensure the long-term sustainability of the project. It’s truly community-driven, MIT-licensed, and under active development with 450+ contributors.
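
Getting started is one command. A sketch reusing the Hugging Face example from earlier, assuming llama-server’s defaults:

    # Pull straight from Hugging Face and serve; the web UI and the
    # OpenAI-compatible /v1/chat/completions endpoint share port 8080:
    llama-server -hf unsloth/Qwen3.5-35B-A3B-GGUF:Q4_K_M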

Mozilla’s llamafile takes the single-file idea further: it packages a model and the runtime into one executable that runs on six OSes with no install at all. Download, double-click, done.

llama-swap handles multi-model orchestration: loading, unloading, and hot-swapping models on demand behind a single API endpoint, as in the sketch below. Pair it with LiteLLM and you get a unified OpenAI-compatible proxy that routes across multiple backends with proper model aliasing.
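
A hypothetical llama-swap configuration sketch (the key names follow llama-swap’s documented YAML layout at the time of writing and may differ between versions; model paths are placeholders):

    cat > llama-swap.yaml <<'EOF'
    models:
      "qwen-coder":
        cmd: llama-server --port ${PORT} -m /models/qwen-coder-q4_k_m.gguf
      "gemma":
        cmd: llama-server --port ${PORT} -m /models/gemma-q4_k_m.gguf
    EOF
    # llama-swap then exposes a single OpenAI-compatible endpoint and
    # loads whichever model the request's "model" field names.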

If you want a desktop GUI, you have genuinely open-source options. Jan (AGPLv3) is a local-first chat app with a clean interface and full source code. koboldcpp (AGPL) is a llama.cpp fork with a built-in web UI and extensive configuration, fully open, fully auditable. Both are real FOSS projects, not wrappers hiding proprietary code behind an open-source engine.

Then there are the closed-source wrappers. LM Studio is proprietary software built on top of llama.cpp, and it’s the most popular alternative to Ollama for good reason: it offers the same one-click convenience with a proper GUI, accepts any GGUF, and exposes all the knobs. Crucially, LM Studio’s developers have acted in good faith toward the ecosystem. They maintain a proper acknowledgements page crediting llama.cpp and its license, and they don’t try to obscure what’s under the hood. It’s a closed-source product, but it’s not a parasitic one. Msty is another closed-source GUI sitting on top of open-source inference, with multi-model support and built-in RAG.

I’m not opposed to someone building a business by making FOSS convenient. That’s legitimate. But it’s worth being honest about what these tools are: commercial products that exist because of llama.cpp, not open-source alternatives to Ollama. The difference between a good-faith wrapper and a bad-faith one isn’t whether they charge money or ship proprietary code; it’s whether they respect the work they stand on. LM Studio does. Ollama didn’t.

Red Hat’s ramalama is worth a look too: a container-native model runner that explicitly credits its upstream dependencies front and center. Exactly what Ollama should have done from the start.

None of these tools require more than a few minutes to set up. The idea that Ollama is the only accessible option hasn’t been true for a long time.

# The Bigger Picture

Georgi Gerganov hacked together llama.cpp in an evening in March 2023 and kicked off a revolution in local AI. He and a community of hundreds of contributors have spent years making it possible to run increasingly powerful models on consumer hardware. That work is genuinely important; it’s the foundation that keeps local inference open and accessible.

Ollama wrapped that work in a nice CLI, raised VC money on the back of it, spent over a year refusing to credit it, forked it badly, shipped a closed-source app alongside it, and then pivoted the whole thing toward cloud services. At every decision point where they could have been good open-source citizens, they chose the path that made them look more self-sufficient to investors.

The local LLM ecosystem doesn’t need Ollama. It needs llama.cpp. The rest is packaging, and better packaging already exists.


Secret document reveals Russia’s plans to aid Iran


LLM (google/gemini-3.1-flash-lite-20260507) summary:

  • Document Origin: an unverified ten page proposal allegedly drafted by russian intelligence for iranian use.
  • Proposed Equipment: a theoretical transfer of 5,000 fibre optic drones alongside unidentified satellite guided weaponry.
  • Strategic Objective: an ambitious attempt to disrupt american maritime and ground operations within the persian gulf.
  • Tactical Advantage: utilizing wire guided technology to bypass existing radio frequency jamming measures.
  • Recruitment Plan: finding potential drone pilots among iranian students in russia and various regional syrian proxies.
  • Infrastructure Vulnerabilities: specific focus on targeting slow moving american landing craft near the strait of hormuz.
  • Lack Of Confirmation: total absence of proof that the plan was ever delivered or effectively implemented.
  • Geopolitical Speculation: questionable reliance on circumventing starlink restrictions in regions outside of the ongoing ukrainian conflict.


THERE ARE many reasons why America’s war on Iran has been failing. One of them is the effectiveness of Iranian drones. Now a confidential document obtained by The Economist from a trusted source suggests that Russia has offered to provide Iran with unjammable drones and training on how to use them against American troops in the Gulf and perhaps elsewhere.

Until now, Vladimir Putin’s government is thought to have provided intelligence that enabled Iran to target American forces in the Middle East. This is the first evidence that it may also have offered to supply innovative weapons in large enough numbers to inflict many casualties on American and allied forces, we can exclusively report.

The secret plan involves Russia providing Iran with 5,000 short-range fibre-optic drones of the sort used in the war in Ukraine, an unknown number of longer-range satellite-guided drones, and training to use both sorts. It is contained in a ten-page proposal prepared by the GRU, the intelligence arm of Russia’s armed forces, for presentation to Iran. We have been able to examine the document, which contains six diagrams and a map depicting islands off the coast of Iran.

Though the document we saw was undated, we estimate that it was drafted within the first six weeks of the war, when there appeared to be a real chance of President Donald Trump ordering ground troops to attack Iranian territory, potentially to seize Kharg Island, an important oil terminal. We do not have direct evidence to confirm that the document was passed to the Iranians, whether any of the drones reached Iran, or if the promised training programme has begun.


Regional intelligence sources briefed on the plan said they considered it plausible, but were unable to independently corroborate it. Christo Grozev, an expert on Russia’s intelligence services, says the proposal is consistent with other evidence that the GRU is looking for ways of increasing Russian support for Iran during its war with America and Israel. And it fits with evidence emerging across the region of closer military co-operation between Russia and Iran.

In late March, for instance, Western intelligence officials said that Russia was preparing to send Iran its own upgraded versions of the long-range Shahed-type drones that it initially bought from Iran in 2022 and started producing in 2023. The Russian versions can better evade air defences and carry heavier payloads, but do not represent a step-change in capability.

Fibre-optic drones, by contrast, have transformed the battlefield in Ukraine by creating large “grey zones” in which vehicles and soldiers in the open are attacked remorselessly. Instead of being guided using radio signals, which can be jammed, operators control them through thin wires that spool out behind them. Operators can use them to conduct pin-point attacks at ranges of over 40km.

An FPV drone. Photograph: Alexander Polegenko/TASS via ZUMA Press/Eyevine

Such fibre-optic drones have recently surfaced in Lebanon, where they have been used by Hizbullah, an Iranian proxy, to attack Israeli forces. Israeli officials confirm these have been supplied by the Islamic Revolutionary Guard Corps, Iran’s most powerful military force, but were unwilling to say whether they were originally from Russia.

Fibre-optic drones emerged in the war in Ukraine in 2024 as a way of countering the jammers that both sides used to defeat radio-controlled drones. Russia used them to devastating effect the following year after mass-producing them. Although less manoeuvrable than their wireless counterparts, they transmit sharper video imagery and give out no radio signals that an enemy could use to locate and attack the operator.

The second part of the secret Russian plan is the provision to Iran of long-range satellite-guided drones equipped with Starlink terminals. Russia had used these to locate and either evade or attack Ukrainian air defences. They were highly effective against Ukrainian logistics, even when operating well beyond the frontlines. In 2026, however, Elon Musk denied Russia’s armed forces access to Starlink by blocking all terminals operating in Ukraine except for those on a “white list” approved by Ukraine’s government. The Russian proposal suggests these drones could instead be diverted and used in the Middle East, which has no such restrictions. Though it speculates that Starlink connectivity there would also be shut off in time, they could still inflict “disorder” on American forces in the interim.

The third element of the plan is training. The document proposes recruiting drone operators from among an estimated 10,000 Iranian students studying in Russian universities. Other communities that could potentially be tapped are Tajiks, who speak both Russian and a version of Persian, and the Alawite minority in Syria, loyal to the ousted regime of Bashar al-Assad. All would be screened for loyalty and against religious extremism, the proposal suggests.

The text of the GRU report suggests that it was written at a time when the main threat facing Iran was an American amphibious assault to open the Strait of Hormuz or to seize Kharg Island. It notes that American landing craft would be particularly vulnerable to drone attack, because of their slow speed. A diagram illustrates how Russian-trained Iranian drone operators could attack a landing flotilla by launching swarms of five or six drones from hidden positions some 15-30km away. Although it now seems very unlikely that America will try to land troops in Iran, the prospect of this concerned Russian and Iranian officials earlier in the war.

The GRU document notes that Russia is heavily committed in the fifth year of its “special military operation” in Ukraine. This would limit the resources it can allocate to helping Iran. The proposal also points out that Russia would be taking political and military risks by becoming more involved in the war in Iran. But limited assistance would complicate any American operation. It would also remain deniable, the document suggests, which would avoid dragging Russia into open conflict with America. 



Will Trump threaten Spain's sovereignty over Ceuta and Melilla? // Whether it be US Congressional budget reports, Israeli newspaper columns or shifting geopolitics, there are some suggesting that Spain's sovereignty claims on its two autonomous cities in North Africa could be coming under threat.

  • Legislative footnote: A US Congressional appropriations report explicitly categorizes Ceuta and Melilla as being situated within Moroccan territory while remaining under Spanish administration.
  • Diplomatic objective: The House committee has formally advocated for the Secretary of State to facilitate a negotiated diplomatic compromise regarding the future status of these territories.
  • Historical precedent: Spain has maintained control of Melilla since 1497 and Ceuta since 1668, with both cities serving as the only European land borders on the African continent.
  • Strategic shifts: Observers link the potential for changing sovereignty claims to recent diplomatic friction between the current Spanish government and the United States.
  • Security and defense: Concerns regarding Spanish sovereignty have been amplified by Spain's failure to meet NATO defense spending targets and its refusal to grant base access during recent regional campaigns.
  • Geopolitical trends: The United States has strengthened its relationship with Morocco, evidenced by the recognition of Moroccan sovereignty over Western Sahara and the alignment of interests within the Abraham Accords framework.

A single paragraph hidden in the footnotes of a US Congressional bill has sparked debate over Morocco’s claim to sovereignty over Ceuta and Melilla, Spain's autonomous cities in North Africa.

A report from the House of Representatives’ influential Appropriations Committee clearly describes both North African enclaves as being “in Moroccan territory” but “under Spanish administration”, whilst encouraging Secretary of State Marco Rubio to consider the future status of both territories.

This comes as a column in an Israeli newspaper has claimed that shifting geopolitics and deteriorating diplomatic relations between Madrid and Washington could make Moroccan claims on Ceuta and Melilla stronger.

READ ALSO: Why are Ceuta and Melilla Spanish?

The column has been picked up and reported in the Spanish press, with online outlet 20 Minutos running the headline: "Should Spain be concerned about Ceuta and Melilla? The Morocco-US-Israel axis looms over the waters of the Strait".

The Spanish territories of Ceuta and Melilla are Europe’s only landmasses in continental Africa. Melilla first fell under Spanish rule in 1497, and Ceuta, which was a Portuguese territory from 1415, was given to Spain under the Treaty of Lisbon in 1668.

Their borders with Morocco are the only physical borders between Europe and Africa and they have for centuries existed as places of contested sovereignty that ebb and flow alongside other political tensions in the region.

Now the US footnote has brought that back into focus.

The full paragraph in the budget bill reads as follows: “The Committee notes that the cities of Ceuta and Melilla, which are under Spanish administration, are situated on Moroccan territory and have long been the subject of a claim by Morocco. The Committee supports the Secretary of State’s efforts to encourage a diplomatic compromise between Morocco and Spain on the future status of Ceuta and Melilla”.

This is the furthest the US has gone in publicly entertaining Moroccan sovereignty claims.


A confluence of factors — war, diplomacy, economics, new alliances, along with the US budget bill — indirectly affecting the tiny overseas cities means that some see an opportunity for Morocco to reclaim them.

One Spanish paper asks: "The perfect storm over the waters of the Strait?" Another talks of "diplomatic fears that Trump may settle scores with Sánchez via Ceuta and Melilla following the ‘first warning’ at the Capitol".

This follows repeated instances of Prime Minister Pedro Sánchez's government refusing to toe Washington's line and championing a left-leaning foreign policy instead.

One column, in the Israeli outlet Ynet, claims that these tensions between Madrid and Washington could put Spanish sovereignty in danger.

The columnist, Amine Ayoub, described as a pro-Israeli Moroccan columnist, writes: "Spain's refusal to allow the United States access to its Rota and Morón military bases during the Iran campaign, its consistent failure to meet NATO defense spending targets, and Prime Minister Pedro Sánchez's confrontational posture toward the Trump administration have collectively generated something rare in North African affairs: a genuine crack in Spain's strategic armor over Ceuta and Melilla."

"The geopolitical earthquake shaking Washington's relationship with Madrid has opened an unexpected window for one of the Arab world's oldest territorial grievances," Ayoub writes.

When contacted by 20 Minutos, Ayoub denied having conveyed any official position from the Israeli authorities and insisted he wrote a purely personal analysis. He also suggested that Rabat is not planning any diplomatic manoeuvres in the near future.

Israel and Spain have clashed very publicly over Gaza in recent years, with Spain arguably taking the most critical stance of any Western government.

“I wrote my opinion on how Israel could help Morocco in Ceuta and Melilla,” Ayoub said.

“Morocco could be pushed by the United States and Israel to reclaim the two cities for reasons of national security in the Strait of Gibraltar area and having a partner like Pedro Sánchez’s current Spain, although Rabat has no short-term interest in the matter,” he added.


The Trump administration has moved closer to Rabat in recent years, especially during his second term. His administration was the first in American history to recognise Morocco’s full sovereignty over Western Sahara, another contested territory in the region.

Similarly, Rabat has remained within the Abraham Accords despite the ongoing conflict and humanitarian crises in the Middle East.

Spanish diplomatic sources, however, admit they saw this coming. “We just had to wait; Sánchez has been seeking confrontation with Trump from day one,” a veteran source told El Español, interpreting the text as “a first warning from the Capitol”.

That's not all. In March, the American Enterprise Institute's Michael Rubin described Spain as "a colonial power running colonies across the Strait of Gibraltar" and suggested Washington recognise Moroccan claims.

Furthermore, Mario Díaz-Balart, chairman of the House Subcommittee on National Security, claimed recently that Ceuta and Melilla "are not in the geographic territory of Spain" and that their status should be "established, negotiated, and discussed between friends and allies."

bogorad: "Sanches is a moron."

In Higher Ed, the Constitution is Optional. DEI is Not. // Universities emphasize diversity, equity, and inclusion over civics—and produce uninformed citizens hostile to free expression.

  • Curricular imbalance: Data indicates that 51 percent of surveyed institutions mandate diversity, equity, and inclusion (DEI) coursework as a graduation requirement, while none require economics and only 15 percent require U.S. government or American history.
  • Supplanting traditional civics: Evidence suggests that schools are actively replacing foundational civic education with DEI curricula, as only three of the 61 campuses that mandate DEI also require a course on U.S. government or American history.
  • Institutional priority shifts: Public flagships, private research universities, and liberal arts colleges prioritize DEI requirements over core civic knowledge, even in states where legislative mandates exist for U.S. history and government.
  • Educational outcomes: Academic surveys demonstrate that current college students exhibit significant deficiencies in historical and civic literacy, alongside a measurable decline in support for free expression and a tolerance for the use of violence to suppress campus speech.
  • Instructional impact: Experimental research indicates that exposure to DEI-focused course materials can heighten perceptions of prejudice and increase punitive instincts among students.
  • Proposed reforms: Recommendations for addressing these issues include reinstating foundational requirements in economics and constitutional history, reducing the mandatory nature of DEI curricula, and increasing legislative oversight to ensure academic reforms are implemented with fidelity.

The faculty, administrators, and trustees who establish graduation criteria at America’s most prominent colleges and universities have made a clear set of judgments about what every educated citizen should know. Their choices suggest that familiarity with diversity, equity, and inclusion (DEI) is more essential than an understanding of economics, American history, and the Constitution.

City Journal’s College Rankings track graduation requirements across a wide range of prominent colleges and universities, including top public flagships, elite private research universities, leading liberal arts colleges, and the Ivies. While none of the 120 schools currently ranked requires an economics class to graduate, and only 15 percent stipulate a course in U.S. government or American history, 51 percent mandate coursework organized around the conceptual vocabulary of diversity, equity, and inclusion. These schools—which have long produced a disproportionate share of the nation’s lawyers, judges, editors, executives, and senior civil servants—make clear to their students that material centered on race, gender, power, and inequality is essential, while material on the U.S. Constitution, American history, and sound economics is not.


Strikingly, every type of higher education institution in City Journal’s College Rankings prioritizes DEI coursework over basic civic education. Among private schools (Ivy, “Ivy Plus,” private research universities, and private liberal arts colleges), course requirements in U.S. government or history are virtually nonexistent, appearing at fewer than one in ten schools. Despite legislative mandates in nine states obligating students to take U.S. government and American history, fewer than one in three public universities makes these classes necessary for graduation. 

By contrast, mandatory DEI courses are widespread, appearing in the general education curriculum of 27 percent of Ivy League and Ivy Plus universities; 45 percent of private research universities; 57 percent of private liberal arts colleges; and 59 percent of public flagships. For example, Georgetown University now demands that all students complete a “Seminar in Race, Power, and Justice” to develop “a baseline vocabulary for discussing racial difference and marginalization.” The “Difference, Power, & Equity” prerequisite at Williams College “encourages students to confront and reflect on the operations of difference, power and equity” and seeks to “provide students with critical tools they will need to be responsible agents of change.” Students at the University of Vermont must take a class on “Race and Racism” that teaches the “meaning and significance of power and privilege.”

Regardless of one’s view of these priorities, nothing prevents coursework in U.S. government, American history, and DEI from coexisting within a general education curriculum. As Figure 1 shows, however, they almost never do. Only three of the 61 campuses mandating a DEI course for graduation also require a class on U.S. government or American history. In practice, then, colleges and universities are not crafting graduation requirements that supplement a dedicated focus on civic education with insights from DEI—instead, they are supplanting curriculum on U.S. government and American history with courses that emphasize identity, power, and inequality.

Figure 1: DEI and U.S. Government/American History Requirements at Prominent Colleges and Universities

Source: ACTA’s What Will They Learn? measure of required U.S. Government/American History classes, combined with author’s review of each school’s general education requirements.

The consequences of these curricular priorities are increasingly hard to ignore. Today’s college students know very little about American history or the functioning of American democracy. ACTA’s 2024 survey found that 60 percent could not correctly identify term lengths for senators and U.S. representatives; 48 percent wrongly believe that the president has the power to declare war; and only 31 percent know that James Madison was the father of the Constitution. We should not be surprised that an approach to higher education that makes U.S. government and American history optional is producing students who are historically uninformed, civically illiterate—and hostile to basic norms of free expression. In 2025, the Foundation for Individual Rights and Expression’s annual College Free Speech Rankings survey asked students whether six controversial speakers from across the political spectrum should be “allowed” to speak on campus. None of the speakers were endorsed for a campus appearance by a majority of students. Further, 34 percent said that “using violence” to stop a campus speech can be “acceptable.”

What colleges and universities choose to make mandatory may be just as important in shaping students’ perceptions, judgments, and moral instincts as what they leave optional. Studies of corporate training programs, for example, have found that DEI initiatives can generate a backlash effect, increasing racial resentment and negative perceptions of the workplace among some employees. More directly, experimental evidence suggests that assigned DEI coursework can meaningfully influence student attitudes. In a 2024 study by researchers at Rutgers University’s Social Perception Lab and the Network Contagion Research Institute, students exposed to short excerpts from widely assigned DEI authors became more likely to perceive prejudice in ambiguous situations and expressed stronger punitive impulses toward imagined offenders. If even brief exposure to assigned DEI material can measurably increase suspicion and strengthen retaliatory instincts, sustained semester-long exposure in a required course is likely to produce deeper and more lasting effects.

Of course, none of this is inevitable. These outcomes reflect a series of intentional curricular choices. These choices can be revisited and reversed. Two recent proposals have identified a path forward. Last month, citing polling data on socialism’s growing popularity among young Americans, Samuel Abrams argued that colleges and universities should add economics to their graduation requirements. A few days later, ACTA’s National Commission on American History and Civics Education called on colleges to require a foundational course in U.S. history and government centered on the Declaration of Independence, the Constitution, the Federalist Papers, the Emancipation Proclamation, the Gettysburg Address, and the major texts of the civil rights movement. These proposals build on long-standing traditions in general education, can be implemented at modest cost using existing faculty, and would meaningfully strengthen students’ civic and economic understanding.

But adding these courses to the required curriculum is only part of the solution. Colleges and universities should also reconsider whether to include DEI coursework among the small set of subjects every student must complete to earn a degree. Courses on race, inequality, identity, and power can remain widely available, but mandating such courses while leaving economics, constitutional government, and American history optional reflects badly skewed educational priorities.

Colleges and universities can correct this imbalance. Trustees, presidents, and faculty senates determine what every graduate must know. Many have recently used that authority to force-feed DEI courses while deemphasizing basic economic and civic literacy. The same authority, exercised differently, could restore a more balanced and beneficial core curriculum.

If institutions refuse to act, legislatures can—and should. The curricular data from public flagships discussed above show that state mandates can compel schools to make U.S. government and American history part of general education. The Civics Alliance’s American History Act offers a particularly promising model, providing legislators with a detailed framework for reestablishing American history and constitutional government as foundational elements of undergraduate education.

Public universities often adapt in ways that violate the spirit (if not the letter) of laws mandating curriculum. In California, some institutions have diluted the state’s American Institutions requirement by letting students satisfy it through courses in Africana Studies, American Indian Studies, or Chicano Studies rather than through direct study of American history or constitutional government. Similarly, publicly funded universities in 12 states that have enacted statutory bans on DEI programming still require DEI coursework. Legislation can set the direction of reform, but only sustained oversight can ensure that reform is implemented faithfully.

A university that makes the Constitution optional but DEI mandatory has lost sight of its civic purpose. Reviving that purpose begins with restoring the curriculum.

Kevin Wallsten is a professor of political science at California State University, Long Beach, and an adjunct fellow at the Manhattan Institute, where his work focuses on higher education reform and the City Journal College Rankings.


What the Data Center Debate Gets Wrong - by Shawn Regan


LLM (google/gemini-3.1-flash-lite-20260507) summary:

  • Project Approval: officials in northern utah have authorized a massive hyperscale data center campus in box elder county
  • Capacity Scales: the facility is planned to reach a nine gigawatt capacity which represents more than double current state energy usage
  • Public Opposition: local community members and environmental groups have expressed concerns regarding potential impacts on water air quality and regional resources
  • Institutional Governance: data center operations occur within existing legal frameworks that dictate how electricity and water rights are managed
  • Water Rights Markets: developers generally acquire water by purchasing established rights from agricultural users rather than depleting common pools
  • Conservation Technology: modern facilities utilize closed loop cooling systems that significantly reduce net consumptive water loss compared to agricultural practices
  • Energy Autonomy: current state legislation enables new projects to generate independent power which mitigates negative pressure on existing residential electrical grids
  • Policy Effectiveness: broad moratoria are ineffective compared to transparent regulatory systems that specifically manage resource scarcity and allocation


In a remote part of northern Utah, where sagebrush extends for miles and the nearest homes are few and far between, officials just voted to approve one of the largest data center projects in American history.

The proposal, backed by Canadian investor Kevin O’Leary, would transform a large swath of Box Elder County into a massive “hyperscale” data-center campus. At full buildout, it could reach a capacity of nine gigawatts—more than double the state’s current electricity consumption—and cover an area the size of Washington, D.C.

The project—like many others across the country—has become a lightning rod. County officials had delayed the vote after pushback from residents, and crowds packed local meetings to voice concerns. Environmentalists warned of dire consequences for water, air quality, and the surrounding region.

And yet, as Utah governor Spencer Cox put it last week, the site itself is about as ideal as it gets. It is remote, adjacent to a major natural gas pipeline, and far from residential neighborhoods. “If you can’t put this here,” Cox said, “then we can’t put them anywhere.”

These tensions are not unique to Utah. Across the country, data centers are becoming a flashpoint in local and national politics, with communities raising alarms about water use, electricity demand, and the broader implications of artificial intelligence. Some policymakers are now calling for moratoria on new data centers. In a few cases the backlash has taken a dark turn, with threats and vandalism aimed at public officials and industry leaders.

Something about the issue has clearly struck a nerve. But the debate over data centers is not just heated—it is becoming increasingly detached from the policies and institutions that actually govern the centers’ impacts on surrounding communities.

The discourse tends to treat resource use in the simplest possible terms. Water is treated as if it were simply “taken” from a shared pool. Electricity demand is assumed to translate directly into higher residential energy bills. These framings are intuitive, but they are often divorced from how these resources are actually governed in practice. Data centers don’t operate in a vacuum. They operate within legal and institutional frameworks that determine who can use water, how power is supplied, and how competing demands are resolved.

The real issue, then, isn’t whether data centers use too much water or energy, but whether the policies and institutions that govern those resources are equipped to handle these new demands, and where they fall short. That is the conversation we are not having. Instead, the debate defaults to panic, moratoria, and blunt prohibitions, making it harder to see where real reform is actually needed.

Consider an issue that has drawn some of the most intense scrutiny: water use. Data centers rely on water to keep servers from overheating. That demand has drawn concern, especially in arid regions. In Utah, opponents of the Box Elder project have pointed to the rapid decline of the nearby Great Salt Lake and warned that the new data center could exacerbate already strained supplies.

In recent months, a growing body of analysis has pushed back against claims that data centers are uniquely water-intensive. In aggregate, they are not. Compared to agriculture, golf courses, or even beer production, total data center water consumption is relatively modest. In many regions, it is a rounding error. Earlier facilities relied on evaporative cooling, which continuously vents water to the atmosphere. But newer data centers, including the proposed Utah project, use closed-loop recirculating systems that cycle the same water repeatedly.

But the relevant question is not how much water data centers use in total. A data center could account for a tiny share of statewide water consumption and still trigger serious local conflicts if it draws from a scarce aquifer or competes with other users for a common supply. Conversely, it could use a meaningful amount of water without much controversy if that water is acquired through existing rights, transferred from lower-value uses, or returned to the system in ways that preserve downstream flows.

In other words, the impact of data center water use is not determined by gallons alone. It depends on the policies that determine how water rights are governed.

In Utah—as in much of the American West—water is not an open-access resource. It is governed by well-defined rights that can be bought, sold, and transferred. New users must acquire water not by simply diverting or pumping at will, but by purchasing rights from existing holders. This process forces a comparison between competing uses and creates a mechanism—price—for deciding which ones persist.

In the case of the Box Elder project, its developers have so far secured rights to 1,900 acre-feet of water—roughly what a mid-sized Utah farm might use annually to irrigate 400 to 500 acres of alfalfa or hay. Those rights were acquired from an agricultural user, not carved out of a common pool at others’ expense. The data center’s water use doesn’t increase total withdrawals from the system; it transfers an existing allocation from one user to another. The developers say they plan to purchase rights to roughly 3,000 acre-feet in total for the project.

The institutional details matter even more than that, however. When it comes to the shrinking of the Great Salt Lake, the relevant question isn’t how many gallons a project uses in the abstract. It’s how consumptive that use actually is compared to what it replaced.

For example, with agricultural irrigation, a significant portion of the water applied to a field is lost to evapotranspiration and never returns to the watershed. In a closed-loop data center, by contrast, consumptive loss is near zero, and periodic “flushing” of the system returns water to the watershed that was previously depleted by the agricultural operation. On balance, that means the project may be net neutral, or even a modest net positive, for the inflows to the Great Salt Lake.

Furthermore, the data center operates under the same basic water constraints as any other user. It cannot simply increase its consumption at will. If it needs more water than originally projected, it must secure additional, existing water rights from willing sellers.

The same pattern shows up in debates over energy. The Utah project’s scale has fueled fears that it will overwhelm the state’s electricity grid and drive up rates for existing customers. But recent legislation in Utah creates a framework that addresses precisely this concern, allowing projects like this one to generate their own power on-site rather than drawing from the existing grid. Under this model, the project’s energy demands don’t hit the existing grid at all, and officials say it may even supply excess power back to the grid, which could result in lower prices for residential consumers. Again, the issue is not whether the project uses energy. It is how that energy is supplied, and under what legal and policy constraints.


These distinctions are rarely part of the public conversation. Instead, concerns about water, energy, and land use are often bundled together and treated as if they call for a single, sweeping response. The result is less a coherent policy framework than a kind of ambient opposition to “data centers” as such, often resulting in calls for moratoria or outright bans.

That is why proposals to simply “pause” data center development are so misguided. As the progressive energy and climate scholar Holly Buck recently argued, bans on data centers do little to slow AI development. Instead, they simply shift it elsewhere, often to places with weaker safeguards, while sidestepping the real policy questions at hand. Writer and policy advocate Nat Purser put it more succinctly: “pausing isn’t policy.” Attempting to address many distinct issues through a blanket moratorium makes it less likely that any of them get addressed.

The policy lens reveals where resource governance systems are working and where they are not. In Utah, surface water rights are well-defined and tradable. The state has also closed the Great Salt Lake basin to new groundwater claims, meaning data centers can’t simply drill new groundwater wells to satisfy their water demands. Instead, they must compete in an existing market for scarce rights.

Not every place looks like Utah. In parts of Arizona, for example, the picture is more complicated. Most of the state’s existing data centers operate within a rigorous “assured water supply” policy framework that requires municipal water providers to demonstrate 100-year supply sufficiency before committing water to new large users. This helps ensure that data centers’ water demands don’t come at the expense of existing users. But a growing pipeline of proposed projects sits outside those boundaries, where groundwater regulation is limited or absent.

The importance of these institutional frameworks is illustrated by a recent episode in Tucson. When the city council voted unanimously to block a data center last year on environmental grounds, it voided a negotiated deal requiring the developer to fund $100 million in reclaimed water infrastructure and commit to returning more water to the system than it consumed. The project proceeded anyway under county jurisdiction—drawing on the same aquifer, but with fewer constraints.

The question is not whether data centers use water, or energy, or land. Everything does. The question is whether the systems governing those resources are equipped to handle new demands—and where they aren’t, what it would actually take to fix them. Figuring that out requires a different kind of debate than the one we’re currently having.



More Federal Job Changes Are Coming // Hiring based on skills, not credentials

  • Federal Workforce Reforms: The Office of Personnel Management is eliminating obsolete job descriptions to modernize and align government hiring practices with private-sector "broad banding" strategies.
  • Human Connection: AI-driven romantic relationships lack genuine reciprocity and human depth, failing to replicate the real-world trials that strengthen the individual soul.
  • Regulatory Burdens: Excessive and rigid zoning and building codes create artificial barriers that hinder the establishment of new, small-scale educational institutions.
  • Legislative Reform: Policymakers are encouraged to revise one-size-fits-all construction regulations to facilitate the expansion of school choice and learning alternatives.
  • Public Policy Consequences: Critical observations highlight a recurring tendency among progressive policymakers to ignore the predictable, often negative outcomes of their implemented programs.


May 6, 2026


Good morning,

 

Today, we’re looking at the federal government’s workforce reforms, AI relationships, and the challenges that come with opening new schools.

 

Write to us at editors@city-journal.org with questions or comments.

Now, on to the news…

Trump’s Federal Workforce Reforms Are About More Than Just Reducing Headcount


Last week, the Office of Personnel Management (OPM) announced that it was removing 115 job descriptions from its list of occupations. “Elevator operator,” for instance, hasn’t been used in years. And categories like “bowling equipment repairing” and “buffing and polishing” are woefully outdated.

“The goal,” Judge Glock writes, “is to make the federal government look more like private-sector employers, which have in recent decades adopted the practice of ‘broad banding’—placing many different jobs under a general description and giving managers more discretion to hire and set pay for those positions.”

This should come as welcome news. Most employees won’t lose their jobs—they’ll work under new titles and requirements.

Read more about the changes.

AI Romance and Existential Despair

An increasing number of individuals have claimed to fall in love with—and some even “married”—AI chatbots. Is this a new low for the state of modern love?

“AI relationship enthusiasts often say that what drew them to chatbots was the attraction of never-ending constancy, supportiveness, withholding judgment, and well, fun,” Joseph Figliolia writes. Valuable attributes in a partner, to be sure. But chatbots can’t think, they have no perspective, and they can’t offer genuine reciprocity.

It may be true that some AI relationships are harmless, especially for those who are ill or disabled and just looking for more joy in their lives. But, Figliolia observes, “the ups and downs of fortune, and life’s inevitable ceremony of losses, are the real crucible for the soul, not an LLM platform.”

Read more about AI relationships and what they mean for society today.

It Shouldn’t Be So Hard to Open Schools

Since the pandemic, families have been flocking to better learning options for their children. But not everyone is able to do so, given the zoning and land-use rules that make it almost impossible for new schools to open in some areas.

“Most states have building codes that treat schools the same if they have six students or more,” Charles Mitchell writes. “That means the same stairwell-width requirements for 2,000-student mega-schools as for 20-child microschools, for instance, which doesn’t make much sense. If a non-school space will be used as a school, it usually needs to be reclassified, triggering expensive upgrades.”

It’s time for lawmakers to change these one-size-fits-all regulations, he argues. There’s one state in particular that offers a path forward. Read about it here.

  • Beware the Bubble — in the Bond Market – Manhattan Institute Senior Fellow Allison Schrager in Bloomberg
  • Mayor Mamdani’s New Scam: Charge NYC Taxpayers to Hire His Rent-a-Mobs – Manhattan Institute Legal Policy Fellow John Ketcham and Adjunct Fellow Christian Browne in the New York Post
  • Self-Checkout Is Under Fire Across the Country. Is Theft Really the Reason? – Manhattan Institute Legal Policy Fellow C. Jarrett Dieterle in Reason

/ Editors’ Picks

  • What Killed Spirit Airlines – Jeffrey A. Tucker in The Epoch Times
  • The GOP whispers about JD Vance are getting louder – Matthew Bartlett in MS Now
  • The Muslim Brotherhood Threat to the United States – Steven Stalinsky in RealClearWorld
  • The Bureaucratization of Assisted Suicide – Héctor Cárdenes Roque in Law & Liberty
  • NASA releases more than 12,000 images from historic Artemis II moon mission – Mary Kekatos in ABC News
/ Reader Spotlight

“A hallmark of progressives is their innate ability to convince themselves the most obvious and inevitable consequences of their policies can’t happen.”

  

A quarterly magazine of urban affairs, published by the Manhattan Institute, edited by Brian C. Anderson.

Copyright © 2026 Manhattan Institute, All rights reserved.
