Strategic Initiatives
11298 stories
·
45 followers

Exclusive | Google to Pay $2.4 Billion in Deal to License Tech of Coding Startup, Hire CEO - WSJ

1 Share
  • Google's Agreement: Agreed to pay $2.4 billion to license AI coding startup Windsurf's technology.

  • Deal's Context: Deal follows stalled talks between OpenAI and Windsurf.

  • Key Actions: Hiring some Windsurf employees and licensing its technology.

  • Hiring Focus: Employees will focus on agentic coding within DeepMind.

  • Licensing Details: Google will gain a nonexclusive license to some of Windsurf’s technology.


Read the whole story
bogorad
6 minutes ago
reply
Barcelona, Catalonia, Spain
Share this story
Delete

Can AI Solve the Content-Moderation Problem? - WSJ

1 Share
  • AI and Content Moderation: Social media platforms struggle with misinformation and offensive content, despite existing moderation systems.

  • User Discontent: A significant portion of users feel content moderation standards are not applied fairly and desire more control over the content they see.

  • Generative AI Solution: Researchers are developing YouTube filters using large language models (LLMs) to analyze video content for harmful material based on various metrics.

  • AI Capabilities and Limitations: Some AI models show promise in identifying harmful language, but current systems are limited to text, are expensive, and may struggle with nuanced social interactions.

  • Ethical Concerns: Personalized AI filters could allow users to avoid harmful content but also enable the spread of hate speech, shift responsibility from platforms to users, and fail to address content shared with others.


Read the whole story
bogorad
12 hours ago
reply
Barcelona, Catalonia, Spain
Share this story
Delete

Radar Trends to Watch: July 2025

1 Share

While there are many copyright cases working their way through the court system, we now have an important decision from one of them. Judge William Alsup ruled that the use of copyrighted material for training is “transformative” and, hence, fair use; that converting books from print to digital form was fair use; but that the use of pirated books in building a library for training AI was not.

Now that everyone is trying to build intelligent agents, we have to think seriously about agent security—which is doubly problematic because we already haven’t thought enough about AI security and issues like prompt injection. Simon Willison has coined the term “lethal trifecta” to describe the combination of problems that make agent security particularly difficult: access to private data, exposure to untrusted content, and the ability to communicate with external services.

Artificial Intelligence

  • Researchers have fine-tuned a model for locating deeds that include language to prevent sales to Black people and other minorities. Their research shows that, as of 1950, roughly a quarter of the deeds in Santa Clara county included such language. The research required analyzing millions of deeds, many more than could have been analyzed by humans.
  • Google has released its live music model, Magenta RT. The model is intended to synthesize music in real time. While there are some restrictions, the weights and the code are available on Hugging Face and GitHub.
  • OpenAI has found that models that develop a misaligned persona can be retrained to bring their behavior back inline.
  • The Flash and Pro versions of Gemini 2.5 have reached general availability. Google has also launched a preview of Gemini 2.5 Flash-Lite, which has been designed for low latency and cost.
  • The site lowbackgroundsteel.ai is intended as a repository for pre-AI content—i.e., content that could not have been generated by AI.
  • Are the drawbridges going up? Drew Breunig compares the current state of AI to Web 2.0, when companies like Twitter started to restrict developers connecting to their platforms. Drew points to Anthropic cutting off Windsurf, Slack blocking others from searching or storing messages, and Google cutting ties with Scale after Meta’s investment.
  • Simon Willison has coined the phrase “lethal trifecta” to describe dangerous vulnerabilities in AI agents. The lethal trifecta arises from the combination of private data, untrusted content, and external communication.
  • Two new papers, “Design Patterns for Securing LLM Agents Against Prompt Injections” and “Google’s Approach for Secure AI Agents,” address the problem of prompt injection and other vulnerabilities in agents. Simon Willison’s summaries are excellent. Prompt injection remains an unsolved (and perhaps unsolvable) problem, but these papers show some progress.
  • Google’s NotebookLM can turn your search results into a podcast based on the AI overview. The feature isn’t enabled by default; it’s an experiment in search labs. Be careful—listening to the results may be fun, but it takes you further from the actual results.
  • AI-enabled Barbie™? This I have to see. Or maybe not.
  • Institutional Books is a 242B token dataset for training LLMs. It was created from public domain/out-of-copyright books in Harvard’s library. It includes over 1M books in over 250 languages.
  • Mistral has launched their first reasoning model, Magistral, in two versions: a Small version (open source, 24B) and a closed Medium version for enterprises. The announcement stresses traceable reasoning (for applications like law, finance, and healthcare) and creativity.
  • OpenAI has launched o3-pro, its newest high-end reasoning model. (It’s probably the same model as o3, but with different parameters controlling the time it can spend reasoning.) LatentSpace has a good post on how it’s different. Bring lots of context.
  • At WWDC, Apple announced a public API for its on-device foundation models. Otherwise, Apple’s AI-related announcements at WWDC are unimpressive.
  • Simon Willison’s “The Last Six Months in LLMs” is worth reading; his personal benchmark (asking an LLM to generate a drawing of a pelican riding a bicycle) is surprisingly useful!
  • Here’s a description of tool poisoning attacks (TPA) against systems using MCP. TPAs were first described in a post from Invariant Labs. Malicious commands can be included in the tool metadata that’s sent to the model—usually (but not exclusively) in the description field.
  • As part of the New York Times copyright trial, OpenAI has been ordered to retain ChatGPT logs indefinitely. The order has been appealed.
  • Sandia’s new “brain-inspired” supercomputer, designed by SpiNNcloud, is worth watching. There’s no centralized memory; memory is distributed among processors (175K cores in Sandia’s 24-board system), which are designed to mimic neurons.
  • Google has updated Gemini 2.5 Pro. While we wouldn’t normally get that excited about an update, this update is arguably the best model available for code generation. And an even more impressive model, Gemini Kingfall, was (briefly) seen in the wild.
  • Here’s an MCP connector for humans! The idea is simple: When you’re using LLMs to program, the model will often go off on a tangent if it’s confused about what it needs to do. This connector tells the model how to ask the programmer whenever it’s confused, keeping the human in the loop.
  • Agents appear to be even more vulnerable to security vulnerabilities than the models themselves. Several of the attacks discussed in this paper involve getting an agent to read malicious pages that corrupt the agent’s output.
  • OpenAI has announced the availability of ChatGPT’s Record mode, which records a meeting and then generates a summary and notes. Record mode is currently available for Enterprise, Edu, Team, and Pro users.
  • OpenAI has made its Codex agentic coding tool available to ChatGPT Plus users. The company’s also enabled internet access for Codex. Internet access is off by default for security reasons.
  • Vision language models (VLMs) see what they want to see; they can be very accurate when answering questions about images containing familiar objects but are very likely to make mistakes when shown counterfactual images (for example, a dog with five legs).
  • Yoshua Bengio has announced the formation of LawZero, a nonprofit AI research group that will create “safe-by-design” AI. LawZero is particularly concerned that the latest models are showing signs of “self-preservation and deceptive behavior,” no doubt referring to Anthropic’s alignment research.
  • Chat interfaces have been central to AI since ELIZA. But chat embeds the results you want, in lots of verbiage, and it’s not clear that chat is at all appropriate for agents, when the AI is kicking off lots of new processes. What’s beyond chat?
  • Slop forensics uses LLM “slop” to figure out model ancestry, using techniques from bioinformatics. One result is that DeepSeek’s latest model appears to be using Gemini to generate synthetic data rather than OpenAI. Tools for slop forensics are available on GitHub.
  • Osmosis-Structure-0.6b is a small model that’s specialized for one task: extracting structure from unstructured text documents. It’s available from Ollama and Hugging Face.
  • Mistral has announced an Agents API for its models. The Agents API includes built-in connectors for code execution, web search, image generation, and a number of MCP tools.
  • There is now a database of court cases in which AI-generated hallucinations (citations of nonexistent case law) were used.

Programming

  • Martin Fowler and others describe the “expert generalist” in an attempt to counter increasing specialization in software engineering. Expert generalists combine one (or more) areas of deep knowledge with the ability to add new areas of depth quickly.
  • Duncan Davidson points out that, with AI able to crank out dozens of demos in little time, the “art of saying no” is suddenly critical to software developers. It’s too easy to get lost in a flood of decent options while trying to pick the best one.
  • You’ll probably never need to compute a billion factorials. But even if you don’t, this article nicely demonstrates optimizing a tricky numeric problem.
  • Rust is seeing increased adoption for data engineering projects because of its combination of memory safety and high performance.
  • The best way to make programmers more productive is to make their job more fun by encouraging experimentation and rest breaks and paying attention to issues like appropriate tooling and code quality.
  • What’s the next step after platform engineering? Is it platform democracy? Or Google Cloud’s new idea, internal development platforms?
  • A study by the Enterprise Strategy Group and commissioned by Google claims that software developers waste 65% of their time on problems that are solved by platform engineering.
  • Stack Overflow is taking steps to preserve its relevance in the age of AI. It’s considering incorporating chat, paying people to be helpers, and adding personalized home pages where you can aggregate important technical information.

Web

  • Is it time to implement HTTP/3? This standard, which has been around since 2022, solves some of the problems with HTTP/2. It claims to reduce wait and load times, especially when the network itself is lossy. The Nginx server, along with the major browsers, all support HTTP/3.
  • Monkeon’s WikiRadio is a website that feeds you random clips of Wikipedia audio. Check it out for more projects that remind you of the days when the web was fun.

Security

  • Cloudflare has blocked a DDOS attack that peaked at 7.3 terabits/second; the peak lasted for about 45 seconds. This is the largest attack on record. It’s not the kind of record we like to see.
  • How many people do you guess would fall victim to scammers offering to ghostwrite their novels and get them published? More than you would think.
  • ChainLink Phishing is a new variation on the age-old phish. In ChainLink Phishing, the victim is led through documents on trusted sites, well-known verification techniques like CAPTCHA, and other trustworthy sources before they’re asked to give up private and confidential information.
  • Cloudflare’s Project Galileo offers free protection against cyberattacks for vulnerable organizations, such as human rights and relief organizations that are vulnerable to denial-of-service (DOS) attacks.
  • Apple is adding the ability to transfer passkeys to its operating systems. The ability to import and export passkeys is an important step toward making passkeys more usable.
  • Matthew Green has an excellent post on cryptographic security in Twitter’s (oops, X’s) new messaging system. It’s worth reading for anyone interested in secure messaging. The TL;DR is that it’s better than expected but probably not as good as hoped.
  • Toxic agent flows are a new kind of vulnerability in which an attacker takes advantage of an MCP server to hijack a user’s agent. One of the first instances forced GitHub’s MCP server to reveal data from private repositories.

Operations

  • Databricks announced Lakeflow Designer, a visually oriented drag-and-drop no code tool for building data pipelines. Other announcements include Lakebase, a managed Postgres database. We have always been fans of Postgres; this may be its time to shine.
  • Simple instructions for creating a bootable USB drive for Linux—how soon we forget!
  • An LLM with a simple agent can greatly simplify the analysis and diagnosis of telemetry data. This will be revolutionary for observability—not a threat but an opportunity to do more. “The only thing that really matters is fast, tight feedback loops.”
  • DuckLake combines a traditional data lake with a data catalog stored in an SQL database. Postgres, SQLite, MySQL, DuckDB, and others can be used as the database.

Quantum Computing

  • IBM has committed to building a quantum computer with error correction by 2028. The computer will have 200 logical qubits. This probably isn’t enough to run any useful quantum algorithm, but it still represents a huge step forward.
  • Researchers have claimed that 2,048-bit RSA encryption keys could be broken by a quantum computer with as few as a million qubits—a factor of 20 less than previous estimates. Time to implement postquantum cryptography!

Robotics

  • Denmark is testing a fleet of robotic sailboats (sailboat drones). They’re intended for surveillance in the North Sea.


Read the whole story
bogorad
13 hours ago
reply
Barcelona, Catalonia, Spain
Share this story
Delete

@AntSpeaks: Hitler and the Grand Mufti’s 1...

1 Share
  • Meeting Context: The Grand Mufti of Jerusalem, Haj Amin al-Husseini, met with Hitler in Berlin due to a shared agenda against British rule and Jewish immigration to Palestine.

  • Al-Husseini's Background: A prominent Arab leader, al-Husseini fled to Europe after opposing British rule and Jewish settlement, seeking Nazi support for Arab independence.

  • Mutual Goals: The Nazis saw al-Husseini as an ally to destabilize the Middle East, while he sought German aid to prevent a Jewish homeland in Palestine.

  • Propaganda and Recruitment: Al-Husseini broadcast anti-Jewish messages and helped recruit Muslim SS units, aligning his anti-Zionist views with Nazi ideology.

  • Legacy and Modern Echoes: The collaboration amplified Nazi propaganda, and the text suggests the modern slogan 'from the river to the sea' echoes al-Husseini's historical aim to expel Jews from the region.


@AntSpeaks: Hitler and the Grand Mufti’s 1...

@AntSpeaks

Hitler and the Grand Mufti’s 1941 Meeting 🧵:

1. Many will be familiar with the somewhat famous picture below. It depicts Haj Amin al-Husseini, the Grand Mufti of Jerusalem, meeting with Hitler in Berlin. Some might find it rather odd, considering Hitler’s belief in racial superiority, that the two would converse, let alone strike up a sort of friendship. But that's because there was a far bigger agenda at play here.

image-1

2. Al-Husseini, a prominent Arab leader from the region known as Palestine at the time, fiercely opposed British rule and Jewish immigration. After playing a leading role in the 1936-30 Arab Revolt, he fled to Lebanon an later to Iraq. In 1941, he supported a pro-Nazi coup there in hopes of ending British influence. Eventually his ambitions would take him to Germany.

image-1

3. When the Iraqi coup collapsed, al-Husseini escaped to Fascist Italy, then occupied by Nazi Germany. The Nazis saw him as a valuable ally to stir up trouble for the British in the Middle East. For his part, he wanted German support to secure Arab independence and stop Jewish settlement in Palestine. It was a mutual opportunity.

image-1

4. At the Reich Chancellery, al-Husseini thanked Hitler for his anti-Jewish stance, saying Arabs and Germans faced the same enemies.. the British, Jews, and Communists. He pressed for a German statement opposing a Jewish homeland in Palestine region and aid for an Arab revolt.

image-1

5. Hitler, preoccupied with the Soviet campaign, wasn’t ready to commit fully but promised help once Germany reached the Caucasus. He swore to wipe out the “Jewish element” in the region, a pledge that resonated with al-Husseini’s desire to expel Jews from the Palestine region.

image-1

6. Their mutual animosity toward Jews helped solidify their relationship. Al-Husseini’s anti-Semitism, closely linked to his opposition to Zionism, aligned with core elements of Nazi ideology. He described Jews as a 'danger' to Arab society, while Hitler portrayed them as a global threat.

image-1

7. Hitler’s promises provided al-Husseini with a platform to advance his propaganda. The Mufti viewed Nazi support as a means to eliminate the Jewish presence in the Palestine region, while Hitler leveraged al-Husseini’s religious influence to portray the Axis powers as defenders of Islam against alleged 'Jewish conspiracies.'

It was during this that Hitler expressed a sort of admiration for Islam, for its perceived strength and discipline, seeing it as a militant faith aligned with his ideals. This respect helped him build alliances with Muslim leaders like al-Husseini.

image-1

8. Al-Husseini settled in Berlin, pocketing 750,000 Reichsmarks monthly from the Nazis. He broadcast venomous anti-Jewish messages on Radio Berlin, calling on Arabs to “kill Jews wherever you find them.” These echoed Nazi extermination rhetoric, spreading poison across the Middle East.

image-1

9. He also helped recruit Muslim SS units, like the Bosnian 13th Waffen-SS Division, which carried out brutal acts. His propaganda mixed Nazi racial hatred with his anti-Zionist calls, framing Jews as Islam’s enemies. While some Arabs listened, support for the Axis wasn’t universal in the region.

image-1

10. The Nazis leaned on al-Husseini’s voice to boost their Arabic propaganda efforts. His broadcasts and pamphlets peddled anti-Jewish conspiracies, gaining traction in parts of the Middle East. But Nazi views of Arabs as inferior limited their trust, keeping Middle East plans on the back burner.

image-1

11. It is highly probable that al-Husseini played a role in shaping the Holocaust’s 'Final Solution.' He reportedly met with Adolf Eichmann to discuss blocking Jewish emigration from Europe to facilitate extermination, and claimed to have known about the death camps following visits in 1942. These accounts suggest he encouraged the Nazis plans for mass extermination.

Adolf Eichmann:

image-1

12. The practical impact of the meeting was limited, largely due to the Axis powers’ military setbacks and shifting priorities. Despite al-Husseini’s hopes, no significant Arab uprisings against the British or Jewish communities materialized during World War II. The Nazis remained primarily focused on their campaigns in Europe rather than expanding their influence in the Middle East.

However, al-Husseini’s collaboration with the Nazis had lasting effects beyond the battlefield. His alliance helped to amplify Nazi anti-Jewish propaganda across the Arab world, intertwining his anti-Zionist objectives with Nazi ideology in a way that cast a long and troubling shadow on the history of the region.

image-1

13. Today’s chant “from the river to the sea,” used to reject Israel’s existence, mirrors the Nazi-backed dream of al-Husseini to purge Jews from the Palestine region. Those shouting it ironically claim to hate Nazism but are blind to how their words echo its anti-Jewish goals from history’s darkest corners.

As for the meaning behind the chant? Ask and most will not even be able to answer that question or they'll pretend to not know because otherwise it would expose their agenda an hatred.

/End.

image-1

If you enjoy my threads and want to support the time and research behind them, any support is greatly appreciated and can be given via the link below. I’ll keep creating engaging threads that provide education and understanding amidst the noise.

buymeacoffee.com/antspeaks1


Read the whole story
bogorad
22 hours ago
reply
Barcelona, Catalonia, Spain
Share this story
Delete

Dr. ChatGPT Will See You Now | WIRED

2 Shares
  • ChatGPT's Medical Successes: A Reddit user found relief from a five-year jaw clicking issue using ChatGPT's suggested treatment.

  • Viral Trend: Similar stories of accurate diagnoses from AI are spreading on social media, including from medical scans.

  • Case Study: Courtney Hofmann's son, undiagnosed for three years, received a correct diagnosis from ChatGPT.

  • Changing Landscape: AI is changing how individuals seek medical advice, shifting from 'Dr. Google' to 'Dr. ChatGPT'.

  • Potential and Challenges: Harvard Medical School instructor Adam Rodman sees potential but also emphasizes the need for careful and informed use of AI in healthcare.


A poster on Reddit lived with a painful clicking jaw, the result of a boxing injury, for five years. They saw specialists, got MRIs, but no one could give them a solution to fix it, until they described the problem to ChatGPT. The AI chatbot suggested a specific jaw-alignment issue might be the problem and offered a technique involving tongue placement as a treatment. The individual tried it, and the clicking stopped. “After five years of just living with it,” they wrote on Reddit in April, “this AI gave me a fix in a minute.”

The story went viral, with LinkedIn cofounder Reid Hoffman sharing it on X. And it’s not a one-off: Similar stories are flooding social media—of patients purportedly getting accurate assessments from LLMs of their MRI scans or x-rays.

Courtney Hofmann’s son has a rare neurological condition. After 17 doctor visits over three years and still not receiving a diagnosis, she gave all of his medical documents, scans, and notes to ChatGPT. It provided her with an answer—tethered cord syndrome, where the spinal cord can’t move freely because it’s attached to tissue around the spine—that she says physicians treating her son had missed. “He had surgery six weeks from when I used ChatGPT, and he is a new kid now,” she told a New England Journal of Medicine podcast in November 2024.

Consumer-friendly AI tools are changing how people seek medical advice, both on symptoms and diagnoses. The era of “Dr. Google” is giving way to the age of “Dr. ChatGPT.” Medical schools, physicians, patient groups, and the chatbots’ creators are racing to catch up, trying to determine how accurate these LLMs’ medical answers are, how best patients and doctors should use them, and how to address patients who are given false information.

“I’m very confident that this is going to improve health care for patients,” says Adam Rodman, a Harvard Medical School instructor and practicing physician. “You can imagine lots of ways people could talk to LLMs that might be connected to their own medical records.”

Rodman has already seen patients turn to AI chatbots during his own hospital rounds. On a recent shift, he was juggling care for more than a dozen patients when one woman, frustrated by a long wait time, took a screenshot of her medical records and plugged it into an AI chatbot. “She’s like, ‘I already asked ChatGPT,’” Rodman says, and it gave her the right answer regarding her condition, a blood disorder.

Rodman wasn’t put off by the exchange. As an early adopter of the technology and the chair of the group that guides the use of generative AI in the curriculum at Harvard Medical School, he thinks there’s potential for AI to give physicians and patients better information and improve their interactions. “I treat this as another chance to engage with the patient about what they are worried about,” he says.

The key word here is potential. Several studies have shown that AI is capable in certain circumstances of providing accurate medical advice and diagnoses, but it’s when these tools get put in people’s hands—whether they’re doctors or patients—that accuracy often falls. Users can make mistakes—like not providing all of their symptoms to AI, or discarding the right info when it is fed back to them.

Read the whole story
cherjr
15 hours ago
reply
48.840867,2.324885
bogorad
1 day ago
reply
Barcelona, Catalonia, Spain
Share this story
Delete

Trump’s Big Beautiful Gift to Anduril

1 Share
  • Monopoly Granted: The 'One Big Beautiful Bill Act' effectively grants Anduril Industries a monopoly on new surveillance towers for U.S. Customs and Border Protection.

  • Bipartisan Support for AI: The legislation, signed by President Trump, allocates over $6 billion for border security technologies, including expanding the 'virtual wall' with AI-powered surveillance towers.

  • Anduril's Competitive Edge: Anduril's Sentry Tower line, featuring 'autonomous' capabilities using machine learning, positioned the company to dominate the market, outperforming competitors like Elbit and General Dynamics.

  • Definition of 'Autonomous': The bill defines 'autonomous' as systems using AI or machine learning to detect and track objects without continuous human intervention, aligning with Anduril's product capabilities.

  • Privacy Concerns: Critics, like the Electronic Frontier Foundation, express concern over mandated AI at the border, potential privacy violations, and the enrichment of contractors at the expense of border communities' rights.


Anduril Industries is a major beneficiary of the One Big Beautiful Bill Act, which includes a section that essentially grants the weapons firm a monopoly on new surveillance towers for U.S. Customs and Border Protection across the southern and northern borders.

The legislation, signed into law by President Donald Trump on July 4, provides significant spending increases to military and law enforcement projects, including over $6 billion for various border security technologies. Among these initiatives is expanding the ever-widening “virtual wall” of sensor-laden surveillance towers along the U.S.-Mexico border, where computers increasingly carry out the work of detecting and apprehending migrants.


Related

Google Is Helping the Trump Administration Deploy AI Along the Mexican Border


Anduril, today a full-fledged military contractor, got its start selling software-augmented surveillance towers to CBP. Anduril has pitched its Sentry Tower line on the strength of its “autonomous” capabilities, which use machine learning software to perpetually scan the horizon for possible objects of interest — i.e. people attempting to cross the border — rather than requiring a human to monitor sensor feeds. Thanks to bipartisan support for the vision of a border locked down by computerized eyes, Anduril has become a dominant player in border surveillance, edging out incumbents like Elbit and General Dynamics.

Now, that position looks to be enshrined in law: A provision buried in the new mega-legislation stipulates that none of the $6 billion border tech payday can be spent on border towers unless they’ve been “tested and accepted by U.S. Customs and Border Protection to deliver autonomous capabilities.”

The bill defines “autonomous” as “a system designed to apply artificial intelligence, machine learning, computer vision, or other algorithms to accurately detect, identify, classify, and track items of interest in real time such that the system can make operational adjustments without the active engagement of personnel or continuous human command or control.”

Anduril is now the country’s only approved border tower vendor.

That reads like a description of Anduril’s product — because it might as well be. A CBP spokesperson confirmed to The Intercept that under the new law, Anduril is now the country’s only approved border tower vendor. Although CBP’s plans for border surveillance tend to be in flux, Homeland Security presentation documents have cited the need for hundreds of new towers in the near future, money that for the time being will only be available to Anduril.

Anduril did not respond to a request for comment.

In a statement to The Intercept, Dave Maass, investigations director at the Electronic Frontier Foundation and a longtime observer of border surveillance technologies, raised concerns about the apparent codification of militarized AI at the border. “I was cynically expecting Trump’s bill to quadruple-down on wasteful surveillance technology at the border, but I was not expecting language that appears to grant an exclusive license to Anduril to install AI-powered towers,” he said. “For 25 years, surveillance towers have enriched influential contractors while delivering little security, and it appears this pattern isn’t going to change anytime soon. Will this mean more towers in public parks and AI monitoring the everyday affairs of border neighborhoods? Most likely. Taxpayers will continue footing the bill while border communities will pay an additional price with their privacy and human rights.”

The law’s stipulation is not only a major boon to Anduril, but a blow to its competitors, now essentially locked out of a lucrative and burgeoning market until they can gain the same certification from CBP. In April, The Intercept reported on a project to add machine learning surveillance capabilities to older towers manufactured by the Israeli military contractor Elbit, and General Dynamics’ Information Technology division has spent years implementing an upgraded version of its Remote Video Surveillance System towers. The fate of these companies’ border business is now unclear.

Elbit did not respond to a request for comment. General Dynamics Information Technology spokesperson Jay Srinivasan did not respond directly when asked about Anduril’s border exclusivity deal, stating, “We can’t speculate on the government’s acquisition strategy and the different contracts it intends to use to exercise the funding,” and pointed to the company’s existing surveillance tower contracts.

Towers like Anduril’s Sentry have proven controversial, hailed by advocates in Silicon Valley and Capitol Hill as a cheaper and more humane way of stopping illegal immigration than building a physical wall, but derided by critics as both ineffective and invasive. Although border towers are frequently marketed with imagery of a lone edifice in a barren desert, CBP has erected surveillance towers, which claim to offer detailed, 24/7 visibility for miles, in populated residential areas. In 2024, the Government Accountability Office reported CBP’s surveillance tower program failed to address all six of the main privacy protections that were supposed to be in place, including a rule that “DHS should collect only PII [Personally Identifiable Information] that is directly relevant and necessary to accomplish the specified purpose(s).”

The new Trump spending package emphasizes the development and purchase of additional autonomous and unmanned military hardware that could prove favorable for Anduril, which has developed a suite of military products that run on machine learning-centric software. One section, for instance, sets aside $1.3 billion for “for expansion of unmanned underwater vehicle production,” an initiative that could dovetail with Anduril’s announcement last year that it would open a 100,000–150,000 square foot facility in Rhode Island dedicated to building autonomous underwater vehicles. Another sets aside $200 million for “the development, procurement, and integration of mass-producible autonomous underwater munitions,” which could describe Anduril’s Copperhead line of self-driving torpedoes, announced in April.

The bill also earmarks billions for suicide attack drones and counter-drone weaponry, technologies also sold by Anduril. Anduril is by no means the only contractor who can provide this weaponry, but it already has billions of dollars worth of contracts with the Pentagon for similar products, and enjoys a particularly friendly relationship with the Trump administration.

Trae Stephens, Anduril’s co-founder and executive chairman, served on Trump’s transition team in 2016 and was reportedly floated for a senior Pentagon position late last year. Michael Obadal, Trump’s nominee for Under Secretary of the Army, worked at Anduril until June, according to his Linkedin, and has come under fire for his refusal to divest his Anduril stock.

Anduril founder Palmer Luckey is also longtime Trump supporter, and has hosted multiple fundraisers for his presidential campaigns.

Following Trump’s reelection last fall, Luckey told CNBC he was sanguine about his company’s fortunes in the new administration. “We did well under Trump, and we did better under Biden,” he said of Anduril. “I think we will do even better now.”

Update: July 9, 2025, 4:45 p.m. ET
This story has been updated to specify that CBP is contracting with General Dynamics’ Information Technology business unit for its border tower program.

Read the whole story
bogorad
1 day ago
reply
Barcelona, Catalonia, Spain
Share this story
Delete
Next Page of Stories