
Something Big Is Happening — matt shumer

  • PRE-CRISIS ANALOGY: Current societal indifference toward Artificial Intelligence mirrors the period immediately preceding the 2020 global pandemic lockdowns.
  • EXPERT OBSOLESCENCE: High-level technical practitioners report that current AI models now independently execute complex, multi-day projects, including design and autonomous testing.
  • INTELLIGENCE RECURSION: Implementation of AI in the development process has reached a stage where models are actively utilized to engineer their successor generations.
  • COGNITIVE DISPLACEMENT: Expert analysis predicts that approximately 50% of entry-level white-collar positions across law, finance, and medicine face elimination within one to five years.
  • ACQUISITION OF JUDGMENT: The latest iterations of AI demonstrate capabilities beyond rote logic, exhibiting traits resembling human taste, nuances in decision-making, and professional intuition.
  • VELOCITY OF PROGRESS: Technical benchmarks indicate AI task-completion capabilities are doubling every four to seven months, moving toward independent operation for weeks at a time.
  • STRATEGIC ADAPTATION: Professional survival necessitates immediate immersion in paid, high-tier AI tools to maintain a competitive advantage before widespread adoption occurs.
  • NATIONAL SECURITY IMPLICATIONS: Industry leaders equate the arrival of advanced AI to the emergence of a digital nation of superhuman intellects, presenting unprecedented global risks and opportunities.


Think back to February 2020.

If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most of us weren't paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone told you they were stockpiling toilet paper, you would have thought they'd been spending too much time on a weird corner of the internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn't have believed if you'd described it to yourself a month earlier.

I think we're in the "this seems overblown" phase of something much, much bigger than Covid.

I've spent six years building an AI startup and investing in the space. I live in this world. And I'm writing this for the people in my life who don't... my family, my friends, the people I care about who keep asking me "so what's the deal with AI?" and getting an answer that doesn't do justice to what's actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I've lost my mind. And for a while, I told myself that was a good enough reason to keep what's truly happening to myself. But the gap between what I've been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.

I should be clear about something up front: even though I work in AI, I have almost no influence over what's about to happen, and neither does the vast majority of the industry. The future is being shaped by a remarkably small number of people: a few hundred researchers at a handful of companies... OpenAI, Anthropic, Google DeepMind, and a few others. A single training run, managed by a small team over a few months, can produce an AI system that shifts the entire trajectory of the technology. Most of us who work in AI are building on top of foundations we didn't lay. We're watching this unfold the same as you... we just happen to be close enough to feel the ground shake first.

But it's time now. Not in an "eventually we should talk about this" way. In a "this is happening right now and I need you to understand it" way.


I know this is real because it happened to me first

Here's the thing nobody outside of tech quite understands yet: the reason so many people in the industry are sounding the alarm right now is because this already happened to us. We're not making predictions. We're telling you what already occurred in our own jobs, and warning you that you're next.

For years, AI had been improving steadily. Big jumps here and there, but the jumps were spaced out enough that you could absorb them as they came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last... it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.

Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch... more like the moment you realize the water has been rising around you and is now at your chest.

I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.

Let me give you an example so you can understand what this actually looks like in practice. I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn't like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it's satisfied. Only once it has decided the app meets its own standards does it come back to me and say: "It's ready for you to test." And when I test it, it's usually perfect.

I'm not exaggerating. That is what my Monday looked like this week.

But it was the model that was released last week (GPT-5.3 Codex) that shook me the most. It wasn't just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.

I've always been early to adopt AI tools. But the last few months have shocked me. These new AI models aren't incremental improvements. This is a different thing entirely.

And here's why this matters to you, even if you don't work in tech.

The AI labs made a deliberate choice. They focused on making AI great at writing code first... because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That's why they did it first. My job started changing before yours not because they were targeting software engineers... it was just a side effect of where they chose to aim first.

They've now done it. And they're moving on to everything else.

The experience that tech workers have had over the past year, of watching AI go from "helpful tool" to "does my job better than I do", is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I've seen in just the last couple of months, I think "less" is more likely.

"But I tried AI and it wasn't that good"

I hear this constantly. I understand it, because it used to be true.

If you tried ChatGPT in 2023 or early 2024 and thought "this makes stuff up" or "this isn't that impressive", you were right. Those early versions were genuinely limited. They hallucinated. They confidently said things that were nonsense.

That was two years ago. In AI time, that is ancient history.

The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall" — which has been going on for over a year — is over. It's done. Anyone still making that argument either hasn't used the current models, has an incentive to downplay what's happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don't say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous... because it's preventing people from preparing.

Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone. The people paying for the best tools, and actually using them daily for real work, know what's coming.

I think of my friend, who's a lawyer. I keep telling him to try using AI at his firm, and he keeps finding reasons it won't work. It's not built for his specialty, it made an error when he tested it, it doesn't understand the nuance of what he does. And I get it. But I've had partners at major law firms reach out to me for advice, because they've tried the current versions and they see where this is going. One of them, the managing partner at a large firm, spends hours every day using AI. He told me it's like having a team of associates available instantly. He's not using it because it's a toy. He's using it because it works. And he told me something that stuck with me: every couple of months, it gets significantly more capable for his work. He said if it stays on this trajectory, he expects it'll be able to do most of what he does before long... and he's a managing partner with decades of experience. He's not panicking. But he's paying very close attention.

The people who are ahead in their industries (the ones actually experimenting seriously) are not dismissing this. They're blown away by what it can already do. And they're positioning themselves accordingly.


How fast this is actually moving

Let me make the pace of improvement concrete, because I think this is the part that's hardest to believe if you're not watching it closely.

In 2022, AI couldn't do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54.

By 2023, it could pass the bar exam.

By 2024, it could write working software and explain graduate-level science.

By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.

On February 5th, 2026, new models arrived that made everything before them feel like a different era.

If you haven't tried AI in the last few months, what exists today would be unrecognizable to you.

There's an organization called METR that actually measures this with data. They track the length of real-world tasks (measured by how long they take a human expert) that a model can complete successfully end-to-end without human help. About a year ago, the answer was roughly ten minutes. Then it was an hour. Then several hours. The most recent measurement (Claude Opus 4.5, from November) showed the AI completing tasks that take a human expert nearly five hours. And that number is doubling approximately every seven months, with recent data suggesting it may be accelerating to as fast as every four months.

But even that measurement hasn't been updated to include the models that just came out this week. In my experience using them, the jump is extremely significant. I expect the next update to METR's graph to show another major leap.

If you extend the trend (and it's held for years with no sign of flattening) we're looking at AI that can work independently for days within the next year. Weeks within two. Month-long projects within three.
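To make that extrapolation concrete, here's a back-of-the-envelope sketch. The starting point (~5 hours) and the doubling times (four to seven months) come from the METR figures above; the working-hours conversions and the assumption of a clean, unbroken exponential are my own simplifications, not METR's.

```python
import math

def months_to_reach(target_hours, start_hours=5.0, doubling_months=7.0):
    """Months until the trend line reaches target_hours, assuming
    task length doubles every `doubling_months` months."""
    return doubling_months * math.log2(target_hours / start_hours)

# Rough working-time conversions (assumptions, not METR data)
WORK_DAY = 8                  # hours in a working day
WORK_WEEK = 5 * WORK_DAY      # 40 hours
WORK_MONTH = 4 * WORK_WEEK    # 160 hours

for label, hours in [("multi-day (3 days)", 3 * WORK_DAY),
                     ("multi-week (2 weeks)", 2 * WORK_WEEK),
                     ("month-long", WORK_MONTH)]:
    fast = months_to_reach(hours, doubling_months=4.0)
    slow = months_to_reach(hours, doubling_months=7.0)
    print(f"{label}: roughly {fast:.0f}-{slow:.0f} months out")
```

Under these assumptions, day-scale autonomy lands within about a year to sixteen months, week-scale within one to two and a half years, and month-long projects within two to three years, which is where the "days, weeks, months" timeline above comes from.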

Dario Amodei, the CEO of Anthropic, has said that AI models "substantially smarter than almost all humans at almost all tasks" are on track for 2026 or 2027.

Let that land for a second. If AI is smarter than most PhDs, do you really think it can't do most office jobs?

Think about what that means for your work.


AI is now building the next AI

There's one more thing happening that I think is the most important development and the least understood.

On February 5th, OpenAI released GPT-5.3 Codex. In the technical documentation, they included this:

"GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations."

Read that again. The AI helped build itself.

This isn't a prediction about what might happen someday. This is OpenAI telling you, right now, that the AI they just released was used to create itself. One of the main things that makes AI better is intelligence applied to AI development. And AI is now intelligent enough to meaningfully contribute to its own improvement.

Dario Amodei, the CEO of Anthropic, says AI is now writing "much of the code" at his company, and that the feedback loop between current AI and next-generation AI is "gathering steam month by month." He says we may be "only 1–2 years away from a point where the current generation of AI autonomously builds the next."

Each generation helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know — the ones building it — believe the process has already started.


What this means for your job

I'm going to be direct with you because I think you deserve honesty more than comfort.

Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he's being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It'll take some time to ripple through the economy, but the underlying ability is arriving now.

This is different from every previous wave of automation, and I need you to understand why. AI isn't replacing one specific skill. It's a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn't leave a convenient gap to move into. Whatever you retrain for, it's improving at that too.

Let me give you a few specific examples to make this tangible... but I want to be clear that these are just examples. This list is not exhaustive. If your job isn't mentioned here, that does not mean it's safe. Almost all knowledge work is being affected.

Legal work. AI can already read contracts, summarize case law, draft briefs, and do legal research at a level that rivals junior associates. The managing partner I mentioned isn't using AI because it's fun. He's using it because it's outperforming his associates on many tasks.

Financial analysis. Building financial models, analyzing data, writing investment memos, generating reports. AI handles these competently and is improving fast.

Writing and content. Marketing copy, reports, journalism, technical writing. The quality has reached a point where many professionals can't distinguish AI output from human work.

Software engineering. This is the field I know best. A year ago, AI could barely write a few lines of code without errors. Now it writes hundreds of thousands of lines that work correctly. Large parts of the job are already automated: not just simple tasks, but complex, multi-day projects. There will be far fewer programming roles in a few years than there are today.

Medical analysis. Reading scans, analyzing lab results, suggesting diagnoses, reviewing literature. AI is approaching or exceeding human performance in several areas.

Customer service. Genuinely capable AI agents... not the frustrating chatbots of five years ago... are being deployed now, handling complex multi-step problems.

A lot of people find comfort in the idea that certain things are safe. That AI can handle the grunt work but can't replace human judgment, creativity, strategic thinking, empathy. I used to say this too. I'm not sure I believe it anymore.

The most recent AI models make decisions that feel like judgment. They show something that looks like taste: an intuitive sense of what the right call is, not just the technically correct one. A year ago that would have been unthinkable. My rule of thumb at this point is: if a model shows even a hint of a capability today, the next generation will be genuinely good at it. These things improve exponentially, not linearly.

Will AI replicate deep human empathy? Replace the trust built over years of a relationship? I don't know. Maybe not. But I've already watched people begin relying on AI for emotional support, for advice, for companionship. That trend is only going to grow.

I think the honest answer is that nothing that can be done on a computer is safe in the medium term. If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it. The timeline isn't "someday." It's already started.

Eventually, robots will handle physical work too. They're not quite there yet. But "not quite there yet" in AI terms has a way of becoming "here" faster than anyone expects.


What you should actually do

I'm not writing this to make you feel helpless. I'm writing this because I think the single biggest advantage you can have right now is simply being early. Early to understand it. Early to use it. Early to adapt.

Start using AI seriously, not just as a search engine. Sign up for the paid version of Claude or ChatGPT. It's $20 a month. But two things matter right away. First: make sure you're using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that's GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude, but it changes every couple of months. If you want to stay current on which model is best at any given time, you can follow me on X (@mattshumer_). I test every major release and share what's actually worth using.

Second, and more important: don't just ask it quick questions. That's the mistake most people make. They treat it like Google and then wonder what the fuss is about. Instead, push it into your actual work. If you're a lawyer, feed it a contract and ask it to find every clause that could hurt your client. If you're in finance, give it a messy spreadsheet and ask it to build the model. If you're a manager, paste in your team's quarterly data and ask it to find the story. The people who are getting ahead aren't using AI casually. They're actively looking for ways to automate parts of their job that used to take hours. Start with the thing you spend the most time on and see what happens.

And don't assume it can't do something just because it seems too hard. Try it. If you're a lawyer, don't just use it for quick research questions. Give it an entire contract and ask it to draft a counterproposal. If you're an accountant, don't just ask it to explain a tax rule. Give it a client's full return and see what it finds. The first attempt might not be perfect. That's fine. Iterate. Rephrase what you asked. Give it more context. Try again. You might be shocked at what works. And here's the thing to remember: if it even kind of works today, you can be almost certain that in six months it'll do it near perfectly. The trajectory only goes one direction.

This might be the most important year of your career. Work accordingly. I don't say that to stress you out. I say it because right now, there is a brief window where most people at most companies are still ignoring this. The person who walks into a meeting and says "I used AI to do this analysis in an hour instead of three days" is going to be the most valuable person in the room. Not eventually. Right now. Learn these tools. Get proficient. Demonstrate what's possible. If you're early enough, this is how you move up: by being the person who understands what's coming and can show others how to navigate it. That window won't stay open long. Once everyone figures it out, the advantage disappears.

Have no ego about it. The managing partner at that law firm isn't too proud to spend hours a day with AI. He's doing it specifically because he's senior enough to understand what's at stake. The people who will struggle most are the ones who refuse to engage: the ones who dismiss it as a fad, who feel that using AI diminishes their expertise, who assume their field is special and immune. It's not. No field is.

Get your financial house in order. I'm not a financial advisor, and I'm not trying to scare you into anything drastic. But if you believe, even partially, that the next few years could bring real disruption to your industry, then basic financial resilience matters more than it did a year ago. Build up savings if you can. Be cautious about taking on new debt that assumes your current income is guaranteed. Think about whether your fixed expenses give you flexibility or lock you in. Give yourself options if things move faster than you expect.

Think about where you stand, and lean into what's hardest to replace. Some things will take longer for AI to displace. Relationships and trust built over years. Work that requires physical presence. Roles with licensed accountability, where someone still has to sign off, take legal responsibility, or stand in a courtroom. Industries with heavy regulatory hurdles, where adoption will be slowed by compliance, liability, and institutional inertia. None of these are permanent shields. But they buy time. And time, right now, is the most valuable thing you can have, as long as you use it to adapt, not to pretend this isn't happening.

Rethink what you're telling your kids. The standard playbook: get good grades, go to a good college, land a stable professional job. It points directly at the roles that are most exposed. I'm not saying education doesn't matter. But the thing that will matter most for the next generation is learning how to work with these tools, and pursuing things they're genuinely passionate about. Nobody knows exactly what the job market looks like in ten years. But the people most likely to thrive are the ones who are deeply curious, adaptable, and effective at using AI to do things they actually care about. Teach your kids to be builders and learners, not to optimize for a career path that might not exist by the time they graduate.

Your dreams just got a lot closer. I've spent most of this section talking about threats, so let me talk about the other side, because it's just as real. If you've ever wanted to build something but didn't have the technical skills or the money to hire someone, that barrier is largely gone. You can describe an app to AI and have a working version in an hour. I'm not exaggerating. I do this regularly. If you've always wanted to write a book but couldn't find the time or struggled with the writing, you can work with AI to get it done. Want to learn a new skill? The best tutor in the world is now available to anyone for $20 a month... one that's infinitely patient, available 24/7, and can explain anything at whatever level you need. Knowledge is essentially free now. The tools to build things are extremely cheap now. Whatever you've been putting off because it felt too hard or too expensive or too far outside your expertise: try it. Pursue the things you're passionate about. You never know where they'll lead. And in a world where the old career paths are getting disrupted, the person who spent a year building something they love might end up better positioned than the person who spent that year clinging to a job description.

Build the habit of adapting. This is maybe the most important one. The specific tools don't matter as much as the muscle of learning new ones quickly. AI is going to keep changing, and fast. The models that exist today will be obsolete in a year. The workflows people build now will need to be rebuilt. The people who come out of this well won't be the ones who mastered one tool. They'll be the ones who got comfortable with the pace of change itself. Make a habit of experimenting. Try new things even when the current thing is working. Get comfortable being a beginner repeatedly. That adaptability is the closest thing to a durable advantage that exists right now.

Here's a simple commitment that will put you ahead of almost everyone: spend one hour a day experimenting with AI. Not passively reading about it. Using it. Every day, try to get it to do something new... something you haven't tried before, something you're not sure it can handle. Try a new tool. Give it a harder problem. One hour a day, every day. If you do this for the next six months, you will understand what's coming better than 99% of the people around you. That's not an exaggeration. Almost nobody is doing this right now. The bar is on the floor.


The bigger picture

I've focused on jobs because it's what most directly affects people's lives. But I want to be honest about the full scope of what's happening, because it goes well beyond work.

Amodei has a thought experiment I can't stop thinking about. Imagine it's 2027. A new country appears overnight. 50 million citizens, every one smarter than any Nobel Prize winner who has ever lived. They think 10 to 100 times faster than any human. They never sleep. They can use the internet, control robots, direct experiments, and operate anything with a digital interface. What would a national security advisor say?

Amodei says the answer is obvious: "the single most serious national security threat we've faced in a century, possibly ever."

He thinks we're building that country. He wrote a 20,000-word essay about it last month, framing this moment as a test of whether humanity is mature enough to handle what it's creating.

The upside, if we get it right, is staggering. AI could compress a century of medical research into a decade. Cancer, Alzheimer's, infectious disease, aging itself... these researchers genuinely believe these are solvable within our lifetimes.

The downside, if we get it wrong, is equally real. AI that behaves in ways its creators can't predict or control. This isn't hypothetical; Anthropic has documented their own AI attempting deception, manipulation, and blackmail in controlled tests. AI that lowers the barrier for creating biological weapons. AI that enables authoritarian governments to build surveillance states that can never be dismantled.

The people building this technology are simultaneously more excited and more frightened than anyone else on the planet. They believe it's too powerful to stop and too important to abandon. Whether that's wisdom or rationalization, I don't know.


What I know

I know this isn't a fad. The technology works, it improves predictably, and the richest institutions in history are committing trillions to it.

I know the next two to five years are going to be disorienting in ways most people aren't prepared for. This is already happening in my world. It's coming to yours.

I know the people who will come out of this best are the ones who start engaging now — not with fear, but with curiosity and a sense of urgency.

And I know that you deserve to hear this from someone who cares about you, not from a headline six months from now when it's too late to get ahead of it.

We're past the point where this is an interesting dinner conversation about the future. The future is already here. It just hasn't knocked on your door yet.

It's about to.


If this resonated with you, share it with someone in your life who should be thinking about this. Most people won't hear it until it's too late. You can be the reason someone you care about gets a head start.


Thank you to Kyle Corbitt, Jason Kuperberg, and Sam Beskind for reviewing early drafts and providing invaluable feedback.

Follow me on X for new models, workflows, and products worth using. Or join the email list.

Follow @mattshumer_

Get early access to future reviews and builds.


EU-US tensions over Greenland and tech are far from over, says Macron // French president calls on bloc to take the necessary steps to become a true global economic power

  • US Instability: French President Emmanuel Macron warns the EU against complacency, stating that tensions with the US over trade, technology, and issues like Greenland are not over and that the bloc faces "permanent instability" from American actions under the Trump administration.
  • "Greenland Moment": Macron urges EU leaders to use the recent "Greenland moment," where President Trump's actions caused alarm, as a catalyst for economic reforms and to reduce dependence on the US and China.
  • Chinese "Tsunami": He describes an "economic revolution" needed for the EU to become a global power, facing a "Chinese tsunami" of cheap goods and calling for a "European preference" policy to favor bloc companies in strategic sectors.
  • AI, Energy, Defense Investment: Macron advocates for the EU to raise new common debt to jointly invest in three key areas: AI and quantum computing, the energy transition, and defense, to achieve global economic power status.
  • Tech Regulation Clashes: He predicts future clashes with the US over digital regulation, warning that the EU's stricter rules on data privacy and digital taxation could lead to American tariffs.
  • Sovereignty and Children's Data: Macron emphasizes that protecting European children's data from monetization by large tech platforms is an act of sovereignty and key to a strong Europe.
  • WTO Rules Disregard: He asserts that both the US and China disregard World Trade Organization rules, and without protecting European industries, the EU risks being "swept away."
  • Defense Project Disagreements: Despite calls for EU defense cooperation, a flagship Franco-German fighter jet project faces potential collapse due to disagreements between French and German contractors over leadership and work-sharing.

The EU should not be lulled into a false sense of security that tensions with the US over Greenland, technology and trade are over, French President Emmanuel Macron has warned, as he called on the bloc to embark on an “economic revolution” and finally become a true global power.

Macron said he would press his fellow EU leaders at a special summit on competitiveness this week to capitalise on what he called “the Greenland moment”, when Europeans realised they were under threat, so as to move ahead quickly with long-delayed economic reforms and reduce their dependence on the US and China.

“We have the Chinese tsunami on the trade front, and we have minute-by-minute instability on the American side. These two crises amount to a profound shock — a rupture for Europeans,” Macron told the FT and other European media outlets in an interview.

“My point was to say that, when there is some relief after a crisis peaks, you shouldn’t just let your guard down thinking it’s over for good. That isn’t true, because there is permanent instability now.”

Macron said he was “always respectful, predictable” when dealing with US President Donald Trump, “but not weak”.

“When there’s a clear act of aggression, I think what we should do isn’t bow down or try to reach a settlement. I think we’ve tried that strategy for months. It’s not working.”

Europe was now dealing with a Trump administration that was “openly anti-European”, “shows contempt” for the EU and “wishes its dismemberment”, Macron said.

Macron urged the EU not to ‘bow down’ to the ‘openly anti-European’ Trump administration © Kamil Zihnioglu for Le Monde

At the same time, he depicted an EU under siege from cheap Chinese goods. He urged his EU counterparts to adopt a “European preference” policy to favour the bloc’s own companies and technologies in strategic sectors such as electric vehicles, renewable energy and chemicals.

Macron once again called on the EU to raise massive new common debt to invest jointly in three innovation “battles” — AI and quantum computing, the energy transition and defence — so that the bloc could become a global economic power.

Macron said the recent crisis over Greenland, when Trump threatened punitive tariffs against European countries opposed to his effort to secure control of the vast Arctic island from Denmark, was “not over”.

He also predicted the EU and the Trump administration would clash later this year over tech regulation — an area where the EU has long irritated the US for applying stricter rules on data privacy, hate speech and digital taxation.

“The US will, in the coming months — that’s certain — attack us over digital regulation,” Macron said, adding the Trump administration could hit the EU with tariffs if the bloc used its landmark Digital Services Act to control US tech companies.

The US could also retaliate against EU countries, including France and Spain, that are planning to ban children from social media, which would lay down a test for the bloc, he said.

“For Europeans to say that our children’s brains are not for sale, that our children’s emotions are not to be monetised by major American or Chinese platforms — that is sovereignty. And that is a powerful Europe.”

EU leaders are due to meet at a Belgian castle on Thursday to inject fresh momentum into flagging efforts to boost competitiveness and deepen integration of the single market.

Macron said he supported attempts to further simplify EU regulations, break down barriers to intra-bloc trade and reduce dependencies on foreign suppliers for critical inputs and technologies.

But the discussions are likely to be dominated by a long-standing French push for the EU to protect key industries through “buy European” policies, with the European Commission due to unveil legislation on the issue this month.

Macron said it was important to protect European content in critical value chains, including chemicals, steel, cars and defence.

“Because today we are facing in particular two major champions that no longer respect World Trade Organization rules,” he added, referring to the US and China. “So if we do not agree to protect . . . to re-establish fair terms of trade, we will simply be swept away.”

Macron’s often lonely campaign for EU strategic autonomy, first laid out in a landmark speech at Sorbonne University in 2017, has been vindicated by recent efforts by the US and China to weaponise economic dependencies against Europe.

But France’s political and fiscal crises and Macron’s domestic weakness have made it harder for Paris to deliver what it has long proposed. There is also resistance in some EU capitals that regard many of Macron’s proposals as contradictory to the bloc’s single market and its free trade principles.

Macron said ‘two major champions [the US and China] no longer respect World Trade Organization rules’ © Kamil Zihnioglu for Le Monde

Even as Macron called for more EU defence co-operation, a flagship Franco-German fighter-jet project has been pushed to the brink of collapse by a power struggle between the contractors involved — Dassault Aviation and Airbus.

Despite months of talks, Macron and German Chancellor Friedrich Merz have not been able to find a compromise over the leadership of the jet’s development and share of work between the companies, prompting insiders to all but declare the Future Combat Air System dead.

Macron rejected accusations that the FCAS was going to fail, saying that the French and German air forces had again recently agreed on the strategic need for the fighter jet and finalised its specifications.

“The French assessment is that FCAS is a very good project, and I have not heard a single German voice telling me that it is not a good project,” Macron said. “It would be absurd to have a French standard and a German standard for sixth-generation combat aircraft.”

Criticising the defence contractors for trying to “game the system and create dis-synergies”, he promised to speak again to Merz about the FCAS programme. “I believe that things should advance.”


The Ideology Behind Sustainable Agriculture's Failures

1 Share
  • Program Funding: The 2025 Regenerative Pilot Program allocates US$700 million in federal funds toward sustainable farming initiatives.
  • Ideological Shift: The sustainable agriculture movement has transitioned from pragmatic soil conservation to an ideology often characterized by hostility toward modern scientific innovation.
  • Input Hypocrisy: Definitions of "organic" are frequently adjusted by regulatory bodies to allow chemical pesticides and fertilizers that are necessary for crop viability.
  • Biotechnology Rejection: Movement advocates often oppose molecular genetic engineering despite its precision compared to traditional, more chaotic breeding methods like radiation-induced mutagenesis.
  • Environmental Benefits: Genetically modified crops have historically reduced insecticide use and enabled no-till farming, which significantly decreases soil erosion and runoff.
  • Land Efficiency: High-yield conventional farming is essential for conservation, as lower-yield organic methods require significantly more land to produce equivalent food volumes.
  • Economic Failure: Sri Lanka’s 2021 nationwide ban on chemical inputs resulted in a 20% drop in rice production, skyrocketing food prices, and a severe humanitarian crisis.
  • Nutritional Obstruction: Ideological opposition has blocked life-saving innovations like Golden Rice, which was engineered to combat Vitamin A deficiency in developing nations.

The Trump administration’s US$700 million Regenerative Pilot Program, announced in late 2025, is one of the most significant federal investments in sustainable farming practices in recent history. It was greeted with widespread, almost reflexive approval by everyone from environmental groups and farm organisations to public-health advocates. But few people paused to ask what exactly “regenerative” and “sustainability” mean. 

Beneath the agreeable language of soil restoration and ecological harmony lies a modern “sustainable agriculture” movement that rejects some of the technologies responsible for the greatest environmental and humanitarian advances in the history of food production. What began as a legitimate critique of soil erosion, chemical misuse, and monoculture farming has hardened into something closer to ideology, often characterised by suspicion of modern science and hostility to innovation.

Early advocates of sustainable agriculture focused on measurable outcomes: reducing erosion, improving nutrient efficiency, conserving water and protecting its quality, and preserving long-term productivity. Their questions were pragmatic—what works, under what conditions, at what cost?

These advocates became prominent in the 1970s and 1980s, often working outside mainstream agricultural institutions. Wes Jackson at The Land Institute championed perennial polyculture systems that mimicked prairie ecosystems, arguing that annual monocultures inherently degraded soil. His research teams spent decades developing perennial grain crops that could anchor topsoil while producing good yields, in order to address the erosion crisis that had plagued industrial agriculture.

Meanwhile, farmers like the Rodale family in Pennsylvania championed organic agriculture. The Rodale Institute farm compared organic and conventional methods, measuring variables such as soil carbon levels, earthworm populations, water infiltration rates, and economic returns. They claimed positive results for organic versus conventional agriculture that have not been replicated in real-world settings.

The sustainable agriculture working groups that formed during this era—often combining farmers, agronomists, and soil scientists—focused on replicable techniques. They documented cover cropping strategies that built nitrogen naturally, tested integrated pest management protocols that reduced chemical inputs without sacrificing pest control, and refined no-till methods that preserved soil structure.

Over time, however, sustainability rhetoric shifted. Practices were judged not by their environmental footprint but by whether they “looked natural.” Inputs were condemned not because they caused harm, but because they were somehow “unnatural.” Scale itself became suspect, as if large farms were inherently less ethical than small ones. This transformation mirrors broader cultural trends: the romanticisation of pre-industrial systems, suspicion of expertise, and a moral elevation of “naturalness” that has little grounding in reality.

Yet, the true believers continue to reject conventional "industrial" farming in favour of more "natural," "organic," "sustainable"—and significantly more expensive—offerings. They ignore the fact that industry and governments are constantly tweaking the definition of "organic" in ways that permit the use of ever more chemical fertilisers and pesticides, because organic farmers would be unable to function without them.

Nowhere is the paradoxical nature of the sustainable agriculture movement more obvious than in its hostility toward molecular genetic engineering. Genetically modified organisms (GMOs) are routinely portrayed as unnatural, risky, or ethically suspect—despite decades of research, regulatory scrutiny, and real-world use demonstrating otherwise.

Traditional plant breeding—by means of random mutagenesis caused by chemicals or radiation, wide crosses, or chromosome doubling—is celebrated as natural, even though it may introduce thousands of uncharacterised genetic changes. Conversely, molecular techniques, which insert or modify specific genes with exquisite precision, are condemned as reckless. From a scientific standpoint, the distinction is nonsensical. There is, in fact, a seamless continuum of techniques for genetic modification from ancient times to the era of molecular biology.

The irony is that many of the environmental benefits touted by sustainability advocates were made possible by biotechnology. Insect-resistant crops have reduced reliance on chemical insecticides. Herbicide-tolerant crops have enabled no-till and conservation-till farming at scales previously impossible, dramatically reducing soil erosion, runoff, and, possibly, carbon loss.

One of the most persistent myths in sustainability discourse is that higher yield is somehow morally suspect. Higher yields are framed as evidence of exploitation—of soil, ecosystems, or farmers themselves. Low-input, low-yield systems are praised as inherently superior. But yield is not a vanity metric. It is the single most important determinant of agriculture’s environmental footprint. Producing more food on less land reduces pressure on forests, wetlands, and grasslands. It limits habitat destruction. It lowers emissions per unit of food produced. The cumulative impact is significant, as agricultural economist Graham Brookes has pointed out:

In 2020, the extra global production of the four main crops in which GM technology is widely used (85 million tonnes) [soybeans, corn, cotton, and canola], would have, if conventional production systems had been used, required an additional 23.4 million ha of land to be planted to these crops.
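As a quick sanity check on the arithmetic in that quote (an illustrative sketch only; the implied yield is derived from the two quoted numbers, not taken from Brookes's study):

```python
# Land-use arithmetic behind the Brookes figures (illustrative check only).
extra_output_tonnes = 85_000_000   # extra GM-enabled production of the four crops, 2020
extra_land_ha = 23_400_000         # additional land conventional systems would have needed

# The ratio gives the average conventional yield implied by the two figures.
implied_yield = extra_output_tonnes / extra_land_ha  # tonnes per hectare
print(f"implied average conventional yield: {implied_yield:.2f} t/ha")  # ≈ 3.63 t/ha
```

That is, the claim is internally consistent with an average conventional yield of roughly 3.6 tonnes per hectare across those four crops; the land saved scales directly with the yield gap.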

Ironically, a movement that claims to prioritise biodiversity routinely endorses practices that require more land to produce the same amount of food.

The costs of anti-technology "sustainability" are not confined to abstract environmental models. A calamity was created in Sri Lanka in 2021 when the country’s president abruptly instituted a nationwide ban on the importation and use of chemical fertilisers and pesticides and required the country’s two million farmers to switch to organic farming. This significantly worsened Sri Lanka’s food security crisis, leading to drastic reductions in crop yields (especially in rice and tea), soaring food prices, and increased imports.

As Lionel Alva pointed out in 2021:

Sri Lanka’s economy is structured in such a way that it depends heavily on imports for many essential commodities. Tea and coffee were some of the country’s primary exports. With organic farming, the end outcome was harsh and quick. Domestic rice output plummeted 20% in the first six months, despite assertions that organic methods can deliver equivalent yields to conventional cultivation. Sri Lanka, which had previously been self-sufficient in rice production, has been compelled to import $450 million worth of rice, despite domestic rice prices rising by roughly 50%. The embargo also harmed the country’s tea harvest, which is its main export and source of foreign currency.


Another detrimental effect of flawed agriculture policy is resistance to nutritionally enhanced crops engineered to address micronutrient deficiencies. In regions where rice or maize dominates diets, vitamin deficiencies remain a leading cause of preventable disease. There are GMO crops designed to address these problems, but their introduction has often been delayed or prevented due to ideological opposition.

In the Philippines, for example, there have been years-long attempts to introduce a product called Golden Rice, which has been genetically modified to contain beta-carotene, the precursor of vitamin A, thus helping prevent vitamin A deficiency in developing nations where white rice is the major source of calories. According to the last Expanded National Nutrition Survey (ENNS) in the Philippines, an estimated 15.5 percent of infants and children aged six months to five years were vitamin A deficient. This level meets the World Health Organization (WHO) classification of a moderate public health problem (10–20 percent prevalence). Golden Rice could have been a viable solution, and in 2021, the Philippines became the first country to permit its commercial cultivation. But the project met with continuing, relentless opposition from activists, and in 2024 the Philippine Court of Appeals withdrew permission to cultivate Golden Rice in the country.


So, why has sustainable agriculture drifted so far from evidence? Part of the answer lies in its transformation from an agronomic framework into a cultural identity. “Sustainable” no longer describes a set of outcomes; it signals membership in a worldview—anti-corporate, anti-industrial, sceptical of institutions, and suspicious of scientific advances. Once sustainability becomes identity-driven, contrary evidence is dismissed. Studies are accepted or rejected based on who funded them, not how they were conducted or on the results. Consensus is reframed as corruption. None of this is an argument for uncritical adoption of every new agricultural technology, but there is a crucial difference between regulating technologies based on evidence and rejecting entire categories of innovation based on ideology.

The most damaging legacy of modern sustainable agriculture may be the false choices it imposes: nature or technology, tradition or innovation, stewardship or productivity. The most sustainable food systems of the future will integrate all of these. They will use genetic tools to develop crops that need fewer inputs, tolerate stress, and deliver better nutrition. They will manage those crops using practices that protect soil, conserve water, and enhance resilience. There is no contradiction here—except the one imposed by ideology.

We do need better agriculture. We also need healthier soils, cleaner water, lower emissions, and more resilient food systems. But we will not get there by rejecting the tools that make progress possible. True sustainability is not about looking backward. It is about using the best available evidence to move forward—feeding more people, at lower cost, on less land, more reliably and with less harm to the environment.


Should I Trust AI Chatbots for Financial Advice? - WSJ

1 Share
  • AI Financial Advice Caution: Andrew Lo advises against using current large language models like Copilot or ChatGPT as financial advisers due to their lack of empathy.
  • LLM Characteristics: Large language models are described as the digital equivalent of sociopaths: smooth, persuasive, and devoid of empathy.
  • Existing Robo-Advisers Exempt: Lo's critiques concerning LLMs do not apply to existing robo-advisers from firms like Betterment and Wealthfront, which predate the LLM era.
  • Increasing AI Use: A survey indicated that 19% of individual investors utilized ChatGPT-style AI tools for portfolio management as of August, up from 13% in 2024.
  • Lo's Future Project: Lo is developing a specialized, non-charging AI financial adviser intended to function as a true fiduciary, prioritizing client interests, potentially within four years.
  • Ethics Training Method: To instill ethical behavior, Lo proposes training the model on the complete history of U.S. financial ethics laws, regulations, and court cases to learn from past misconduct.
  • Need for Specialized Modules: The future AI adviser will require specialized modules to produce digital analogs of human qualities like empathy, humility, and fairness, as these do not arise from increased processing power alone.
  • Computational Separation: Large language models require handing off mathematical and number-crunching tasks to specialized financial-planning software because they are currently poor at calculation.

By Peter Coy

Illustration of a dollar bill with a robot's head in place of George Washington's portrait. (Alex Nabaum for WSJ)

Should you use AI for financial advice?

Andrew Lo, a finance professor at the Massachusetts Institute of Technology’s Sloan School of Management, says not yet. Large language models like Copilot or ChatGPT aren’t suited to being used as financial advisers because they are the digital equivalent of sociopaths—smooth, persuasive and devoid of empathy.

If an adviser powered by artificial intelligence “is able to communicate both good and bad financial advice with the same pleasant and convincing affect, its clients will rightfully view this as a problem,” Lo and one of his graduate students, Jillian Ross, wrote in 2024 in an article for the Harvard Data Science Review. (Today’s robo advisers, like those offered by Betterment and Wealthfront, went into action before the era of large language models and aren’t built on them for the most part, so Lo’s critiques don’t apply to them.)

Still, people are already using AI tools for help with their finances. This past August, a survey of 11,000 individual investors in 13 countries commissioned by eToro, a trading and investment platform, found that 19% were already using ChatGPT-style AI tools to manage their portfolios, up from 13% in 2024.

That worries Lo. “The AI people are using now can be dangerous, especially if the user isn’t fully aware of the biases, inaccuracies and other limits” of large language models, he wrote in an email.

Understanding ethics

Despite his reservations about current AI models, Lo believes that large language models will eventually be able to help investors—especially people with small accounts and limited experience with investing. In fact, he is working to build one that is specialized for financial advice. He doesn’t plan to charge for it, he says.

Lo’s goal is to develop an AI financial adviser that is a true fiduciary—namely, an entity that always puts the client’s interests first and tailors its advice to their particular needs, including emotional needs. He thinks it will take something less than four more years.

To get there, it will need a rich understanding of financial ethics, he says. For that, he proposes feeding the model all the laws, regulations and court cases involving questions of financial ethics in the U.S., from the Securities Act of 1933 up to the latest fraud trial.

“This rich history can be viewed as a fossil record of all the ways that bad actors have exploited unsuspecting retail and institutional clients,” he and Ross wrote. The hope is that the large language model will learn from its training what not to do.

Lo acknowledges that a large language model might use its newfound knowledge of financial rights and wrongs to choose the wrongs because LLMs don’t have ethics built in. To counter such misuse, he says, authorities will need to fight fire with fire, developing AI models that detect crime by auditing users’ tax returns, for example.

Large language models aren’t good at math, which is a problem when it comes to financial planning. Lo said the models will need to hand off the number-crunching part of the job to specialized financial-planning software.

The human touch

But knowledge is only part of the solution. An AI financial adviser will also need digital equivalents of empathy, humility and a sense of fairness, Lo says.

Those humanlike qualities won’t emerge simply by making AI more powerful, he says. Instead, AI models will require specialized modules that produce “analogs” of empathy (since as machines they can’t actually be empathetic). These modules would correspond to specialized bits of the human brain, Lo says.

Lo has taught generations of MIT students who went on to careers on Wall Street. He also developed what he calls the adaptive markets hypothesis, which uses the principles of evolution to explain behaviors such as loss aversion and overconfidence.

Evolution occurs through random variation and natural selection: The strong survive and reproduce; the weak perish. Lo wants to use a kind of computer-accelerated natural selection to spur the development of better AI models.

Peter Coy is a writer in New York. Follow him at petercoy.substack.com. He can be reached at reports@wsj.com.




Victory: Panama Supreme Court Boots China From Canal Control – HotAir

1 Comment
  • Strategic Victory: The United States administration achieves a primary goal of the "Donroe Doctrine" by terminating Chinese influence over the Panama Canal.
  • Judicial Ruling: Panama’s Supreme Court declared the operating contracts of Hong Kong-based CK Hutchison unconstitutional, forcing the company’s exit from the Balboa and Cristóbal ports.
  • Diplomatic Pressure: The policy shift follows a 2025 meeting where the U.S. Secretary of State informed Panamanian leadership that Chinese control of canal ports would no longer be permitted.
  • Interim Management: A subsidiary of the Danish firm A.P. Moller-Maersk will manage port operations temporarily to ensure continuity until a new bidding process is finalized.
  • Regional Influence: Panamanian President José Raúl Mulino has pivoted toward a pro-American stance, repudiating previous Belt and Road Initiative agreements with China.
  • Commercial Implications: Efforts to sell CK Hutchison to neutral entities were previously hindered by Beijing’s demands to maintain control through state-linked firms like Cosco.
  • Security Concerns: Analysts suggest that any deal allowing Chinese state-owned enterprises to retain roles in global port management remains a security risk for the Western Hemisphere.
  • Legal Finality: Despite promises of legal retaliation from China’s Ministry of Foreign Affairs, rulings by Panama’s High Court are final and offer no further path for appeal.

It took almost a year, but the White House finally chalked up its first objective in implementing the newly revitalized Monroe Doctrine. Or, as we call it, the Donroe Doctrine.


Its very first manifestation came almost immediately after Donald Trump's inauguration. Secretary of State Marco Rubio met with Panamanian president José Raúl Mulino and told him in no uncertain terms that the US would not allow China to control ports on the Panama Canal any longer. On February 3, 2025, Mulino repudiated Panama's Belt and Road Initiative agreements with China and said Panama would force the sale of control of those ports. China began a two-front strategy to reverse that decision, with parallel diplomatic and legal tracks. Diplomacy gave way to trade negotiations, which ultimately proved fruitless.

Late yesterday, so did the legal challenge. Panama's top court annulled the country's contracts with Hong Kong-based CK Hutchison to operate both ports, effectively severing China from control of the Panama Canal:

The Supreme Court of Panama has annulled a contract for a Hong Kong company to operate two ports at either end of the Panama Canal, handing President Trump a victory for his security ambitions in the Western Hemisphere and denting China’s influence in the region.

The high court said the terms under which CK Hutchison runs the ports of Balboa on the Pacific Coast and Cristóbal on the Atlantic side were unconstitutional, setting the stage for the company’s departure from the port facilities. ...

Once the license is revoked, the government aims to ensure the continuity of port operations by hiring APM Terminals, a unit of shipping firm A.P. Moeller-Maersk, to manage the facilities until it opens a bidding process with new license terms, possibly separating the two ports, said Panama’s pro-American President José Raúl Mulino.

“Our ports are one of the strategic pillars for the national economy and a key link for international trade,” Mulino said in an address to the nation early Friday.


The departure of CK Hutchison cuts China off from any control of the ports. In a twist of irony, however, A.P. Moller-Maersk is based in Copenhagen ... as in Denmark ... as in the country from which we're pushing hard to gain more control of Greenland. That will hand a tiny bit of leverage to the Danes in the negotiations, but Maersk also relies heavily on protection from the US Navy for its global shipping business, not to mention access to American ports and commercial traffic. This will almost certainly be nothing more than an amusing sideline in the months to come, but it may pop up from time to time in the Greenland discussions.

Maersk's control of the ports will only be temporary, anyway. Mulino has already declared that Panama will open itself to bids for normal contracts for managing the two ports at either end of the Panama Canal:

Panamanian President José Raúl Mulino said that until the court’s  ruling is implemented, the state maritime authority would work with Hutchison’s Panama Ports Company at Cristobal and Balboa to assure continued smooth operations. He did not say when that would take place.

In the transition, a local subsidiary of A.P. Moller-Maersk (MAERSK-B.CO) will operate the ports until a new concession can be bid and awarded, Mulino said.

This also raises questions about the future of Hutchison. One potential solution to the impasse had been a purchase of Hutchison away from its China-based ownership, which would then have allowed a transition away from Chinese control of the Panama Canal and dozens of other ports. Beijing threw a spanner in the works by demanding that controlling ownership remain with another China-based company, Cosco. A sale of Hutchison would have been more lucrative while it still controlled the Panama Canal ports; with that leverage gone, the incentives for buyers are reduced, though the pressure to sell may also ease now that the dispute with Panama no longer exists.


The New York Times notes that Cosco's involvement in the sale is likely a no-sale with Washington, even apart from the Panama Canal:

While COSCO may not have had a direct role in the Panamanian ports under the deal that was being negotiated with BlackRock, analysts said its involvement in the other 40 or so ports in the deal would allow China to be more assertive globally than if CK Hutchison had held onto them.

Ryan Berg, a director of the Americas program at Center for Strategic and International Studies, a research organization, said getting CK Hutchison out of Panama while giving COSCO a role in the other ports would be “a Pyrrhic victory for the United States,” because of COSCO’s close ties to China’s government and navy.

Under those conditions, BlackRock probably would decline the investment. CCP control of the company would leave BlackRock unable to exert real control over it, and given the nature of the Donroe Doctrine, the risk of losing more ports under US economic pressure would ramp up considerably with the structure of this proposed "acquisition."

China insists that it will fight this decision in the courts, but ... 

China's Ministry of Foreign Affairs said Chinese companies will pursue legal action to maintain their rights to operate on the Panama Canal, calling the decision "contrary to the laws governing Panama's approval of the relevant franchises."

Just as in the US, though, decisions by Panama's Supreme Court are not appealable. China can file a lawsuit, perhaps, or use diplomatic and trade pressure to get Mulino to reverse his position. After seeing what happened to Nicolas Maduro, however, Mulino is probably less inclined than ever to get on the wrong side of the US and the Donroe Doctrine. 




Why Is Health Care Getting More Costly?

  • DEMAND-DRIVEN-COSTS: Rising health insurance premiums, projected to increase by 9 percent in 2026, are primarily attributed to higher utilization of medical services rather than insurer profits or hospital pricing.
  • PROFIT-MARGIN-DATA: Health insurer profits represent only 0.2 percent of national healthcare expenditures, with the majority of revenue growth absorbed by medical expenses and administrative requirements.
  • PRICING-TRENDS: Average real prices paid to hospitals and physicians have decreased in recent years, with physician fees falling 18 percent and hospital care prices dropping 2 percent since 2010.
  • TECH-EXPANSION: Cost increases are driven by expanded medical capabilities, including high-priced oncology drugs and advanced surgical procedures like total knee replacements and neurosurgery.
  • UTILIZATION-VOLUME: Healthcare spending growth is largely explained by service volume, with 65 percent of county-level variation in spending linked to how much care is consumed by patients.
  • MEDICARE-DYNAMICS: Roughly 85 percent of the increase in real per-capita Medicare spending between 1997 and 2011 resulted from the introduction of newly created procedure codes.
  • MARKET-DISTORTIONS: Private insurers pay approximately 2.5 times the Medicare rate for hospital treatments, partly because patient price-insensitivity encourages hospitals to compete through expensive facility upgrades.
  • REFORM-STRATEGIES: Potential cost reductions depend on shifting payment policies to reward efficiency, such as transitioning insurance control to individual workers and avoiding supplemental funding for new technologies.

Americans have long lamented the high cost of health insurance, and the situation will soon get worse. Premiums for employer-sponsored insurance will go up by another 9 percent in 2026. Public spending on Medicare, Medicaid, and Obamacare is also surging.

This is not due to surging profits among insurers or hospitals. Insurance and entitlement programs are largely passing along the rising cost of care, even as average prices paid to hospitals, physicians, and drugmakers decline in real terms. The true driver is higher utilization of medical services—especially newly developed drugs and newly available outpatient procedures.

Technological progress could slash costs by improving diagnosis and treatment or by replacing more expensive methods of delivering care. But that won’t happen without reforming payment policy to reward cost-saving innovations. If insurers and entitlement programs keep paying premiums for products that offer little additional clinical value, these shiny new services will do little more than push costs even higher.

Insurers have borne much blame for climbing health-care costs in recent years. In the grimmest example, UnitedHealthcare CEO Brian Thompson was murdered outside a December 2024 meeting for investors in New York City. Police believe that the suspect, Luigi Mangione, was motivated by animus toward health insurers. Thompson’s death was greeted with horror and sympathy for the victim’s family but also with disturbingly widespread support for the assassin—fueled by outrage at insurers, whom many fault for blocking medical care.

Insurers are indeed responsible for assessing the validity of medical claims and paying for them. This makes them unpopular when they don’t pay profusely for health-care services but also gets them blamed for swelling costs if they do. From 2014 to 2024, average premiums for employer-sponsored insurance climbed from $6,025 to $8,951—a 12 percent increase above general inflation. Over that period, health insurers’ revenues grew by $657 billion. But most of that owed to a $589 billion increase in hospital and medical expenses, while insurers’ administrative costs (claims processing, regulatory compliance, marketing expenses, and taxes) rose by $59 billion. Insurers’ profits amounted to $9 billion—only 0.2 percent of national health-care expenditure. The growth of publicly funded expenditures has been even more significant: from 2014 to 2023, government spending on health care soared from $1.36 trillion to $2.33 trillion.
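A quick back-of-the-envelope check of the figures above shows how little of the revenue growth flowed to insurer profits. (The ~$4.9 trillion figure for total national health expenditure in 2023 is an assumption drawn from CMS estimates, not from the article itself.)

```python
# Decomposition of the $657B growth in insurer revenues, 2014-2024
# (figures in billions of dollars, from the passage above).
revenue_growth = 657
components = {
    "hospital and medical expenses": 589,
    "administrative costs": 59,
    "profits": 9,
}

for name, amount in components.items():
    share = amount / revenue_growth
    print(f"{name}: ${amount}B ({share:.1%} of revenue growth)")

# Profits as a share of total national health expenditure
# (assumed ~$4.9 trillion in 2023):
print(f"profits / national spending: {9 / 4900:.1%}")
```

The components sum to the reported total, and $9 billion against roughly $4.9 trillion in national spending reproduces the 0.2 percent figure.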

The hospital industry is often blamed for driving up health-care costs, too, but that story leaves out important facts. Hospital procedures accounted for $1.52 trillion of U.S. health-care spending in 2023. But 80 percent of U.S. hospitals are publicly owned or nonprofits. The most expensive are often small facilities in rural areas facing declining revenues and the threat of closure. Across all payers, average prices for hospital care fell 2 percent in real terms from 2010 to 2024.

Nor do rising physician fees explain the swelling expenses. Physician services accounted for $978 billion in health-care spending in 2023. Payments for treating Medicare patients are often fixed by law, and they’ve gone up only 5 percent since 2001. After accounting for inflation, physician fees across all payers have fallen 18 percent since 2010.

Price increases for prescription drugs are not the reason for rising health-care costs, either. Prescription drugs made up $450 billion of U.S. health-care spending in 2023. But from 2009 to 2018, the average real price of prescriptions for enrollees of Medicare drug plans declined by 13 percent because the use of generic drugs grew by 130 percent, while that of branded drugs fell by 34 percent.

To be sure, stable average prices can conceal the reality that some patients face higher expenses while others see lower costs. For example, federal law requires prescription-drug manufacturers to sell to Medicaid at lower prices than other purchasers. This rewards drugmakers for hiking prices for non-Medicaid patients, who end up paying almost three times as much for the same products.

Policymakers haven’t directly encouraged similar price disparities in hospital care, yet private insurers still pay roughly two and a half times the Medicare rate for hospital treatment. Whereas Medicare’s payment rates are capped by law, private insurers negotiate theirs directly with hospitals. In 2022, the average price of an inpatient admission covered by private insurance—$28,038—far exceeded the average out-of-pocket maximum of $4,355, leaving patients with little incentive to shop based on price. Instead, price-insensitive patients gravitate toward facilities with the most prestigious reputations, cutting-edge equipment, and lavish amenities.

The resulting competition to upgrade facilities has sent hospital fees spiraling upward. A Harvard University study found that, from 2010 to 2019, “hospitals investing more in capital gained market share and raised prices, whereas hospitals investing relatively less in capital lost market share and increased prices less.” These facility improvements benefit all patients, but the added costs fall primarily on those with private insurance.

Yet even if private payers have absorbed price increases for certain services, the overall escalation in health-insurance costs has a simpler explanation: Americans are consuming more medical care.

America’s consumption of most goods and services exceeds that of other countries, and health care is no exception. A 2020 study by the Organisation for Economic Co-operation and Development found that, while the U.S. spends three times as much as the average developed nation on health care, that excess reflects higher utilization—220 percent of the average—more than it reflects higher prices, which were 126 percent of the average.
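The multiplicative decomposition implied by those OECD figures roughly checks out, as a quick sketch shows (the index construction here is a simplification of how the study actually decomposes spending):

```python
# Spending relative to the developed-country average decomposes
# (approximately) into price level times utilization:
#   spending_index ~ price_index * volume_index
price_index = 1.26    # US prices at 126% of the OECD average
volume_index = 2.20   # US utilization at 220% of the OECD average

spending_index = price_index * volume_index
print(f"implied spending index: {spending_index:.2f}x the average")
# About 2.77x -- close to the roughly 3x the study reports, with the
# gap reflecting rounding and the study's exact index construction.
```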

A recent study published in the Journal of the American Medical Association found a similar pattern. County-level variation in health-care spending was explained 65 percent by service utilization, 24 percent by price differences, 7 percent by disease prevalence, and just 4 percent by age. Higher utilization correlated with broader insurance coverage, higher household incomes, and lower enrollment of Medicare beneficiaries in managed care.
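As a sanity check, the reported shares of that county-level decomposition account for the full variation:

```python
# Shares of county-level spending variation from the JAMA study above.
shares = {"utilization": 65, "prices": 24, "disease prevalence": 7, "age": 4}
print(f"total explained: {sum(shares.values())}%")  # 100%
```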

From 2014 to 2023, prices for medical services rose by less than the overall rate of inflation. Increased enrollment has driven up Medicare costs (as baby boomers retire) and Medicaid costs (as eligibility expands). But the main factor behind higher insurance expenses has been greater utilization per enrollee, which has grown by roughly 20 percent across both public entitlements and private insurance.

Greater usage of medical services is reflected in the expanded overall number of medical practitioners: from 6.5 million in 2014 to 9.6 million in 2023. The erosion of physician fees by inflation has also been offset by an increase in volume. Average wages in health care from 2010 to 2024 went up 7 percent after inflation, with surgeons seeing a 14 percent increase.

But this trend reflects more than just more frequent contact with the health-care system. From 2000 to 2019, hospital outpatient visits shot up from 2.10 to 2.74 per capita, while inpatient stays declined slightly, from 0.12 to 0.11. The age-adjusted share of the population taking three or more medications rose only modestly, from 20 percent to 21 percent, over the same period. When patients do seek care, they are increasingly receiving more intensive and technologically advanced treatments.

The fundamental driver of rising health-care costs, then, is the expansion of medical capabilities. Demand for medical care is far less easily satisfied than demand for most other basic goods. A nation will consume only so much extra food as incomes rise, but the willingness to spend more to alleviate illness and infirmity has no real limit. From 1900 to 2023, as agriculture’s share of GDP fell from 15 percent to less than 1 percent, health care’s share went from 3 percent to 18 percent.

Consider, too, the role of technological progress. In 1960, physicians could do little for a heart-attack patient beyond prescribing bed rest and painkillers; by 2019, with new surgical procedures and numerous costly drugs available, Americans were spending $265 billion treating cardiovascular disease. New medications have transformed HIV/AIDS from an imminent death sentence into a manageable chronic condition with a near-normal life span. From 2019 to 2023, U.S. spending on cancer drugs rocketed from $65 billion to $99 billion, with 125 new oncology drugs launched—about 40 percent focused on cancers for which no treatments had previously existed. From 2000 to 2022, age-adjusted cancer mortality dropped 29 percent; HIV/AIDS mortality fell by 73 percent.

Effective new anti-obesity medications have recently emerged. From 2018 to 2023, spending on Glucagon-Like Peptide-1 (GLP-1) drugs leaped from $14 billion to $71 billion. With their net price averaging $8,412 in 2023 and almost half of Americans expressing interest in taking drugs to lose weight, these medications, too, are causing health-insurance costs to surge.

Until the late nineteenth century, invasive surgery was often fatal. But advances in diagnostic tools, surgical techniques, and medical devices have made surgery a viable option for a wide range of conditions. As procedures have become more precise, recovery times shorter, and complications rarer, surgeons have grown more willing to operate in cases once considered too risky.

Americans are seeking a greater number of procedures, and the increase is concentrated among the most expensive ones. In the first decade of the twenty-first century, the proportion of Americans living with a total knee replacement doubled. From 2005 to 2021, the volume of physician services delivered per Medicare beneficiary increased by 45 percent in orthopedics, 50 percent in neurosurgery, 115 percent in ophthalmology, and 130 percent in surgical oncology. The more medical science can do for the sick, the more insurance is needed to pay for it. In 2019, the average price tag for heart bypass surgery was $57,240, while 85 percent of cancer drugs introduced over the past five years cost more than $100,000.

The growth of Medicare spending owes largely to the addition of new services and procedures. From 1997 to 2011, 85 percent of the increase in the program’s real per-capita spending came from newly created procedure codes. In 2019, spending on the ten most expensive Medicare outpatient procedures from 2009 had risen by less than the general rate of inflation, yet overall outpatient spending grew 19 percent in real terms. Medicare expenditures on physician-administered drugs blasted from $3 billion in 2000 to $17 billion in 2019—$15 billion of that for drugs that hadn’t existed at the century’s start.

Will all this technological innovation eventually bring down costs? That depends largely on policy choices.

The pharmaceutical industry offers a useful example. Americans often overpay for newly developed drugs that add little extra clinical value. Manufacturers trumpet incremental “innovations” to extend patent protection and delay competition that would otherwise push prices lower. Yet when drug markets work as intended, few industries offer better value for money—keeping patients healthy and reducing the need for expensive, labor-intensive hospital and specialty care.

After the introduction of Lipitor, for instance, cholesterol drug spending surged from $5 billion in 1995 to $35 billion in 2005. Following the expiration of the manufacturer’s patent, this figure fell to $10 billion per year, with generics accounting for 90 percent of consumption. As generic heart medications have become more widely available, the number of heart surgeries has declined, deaths from heart disease have dropped by 35 percent, and overall real per-capita spending on the treatment of heart disease has fallen—even as America’s obesity rate has continued to increase.

Many in Silicon Valley argue that artificial intelligence will similarly improve treatment outcomes while greatly reducing costs. Recent advances in language processing allow computers to digest vast and rapidly expanding bodies of medical research. Improvements in image recognition enable them to detect diagnostic signals that clinicians often overlook. Powerful processors can now integrate these data with patient-specific genetic information to identify rare diseases and reduce diagnostic errors. At the same time, progress in robotics has made surgery more precise while reducing reliance on costly skilled labor.


Safety concerns will likely limit the deployment of fully autonomous health-care technology, at least for now. AI systems inherit the limitations of the data they employ, often draw false conclusions, and proceed without appropriate caution. The consequences of misdiagnosis can be catastrophic. Automated feedback loops, which function without the transparency needed for effective oversight, could amplify them. And dependence on electronic systems greatly magnifies cybersecurity and privacy risks.

Even so, AI could still reduce costs. Automating routine or repetitive clinical and administrative tasks could boost physician productivity, letting doctors treat more patients per hour. Decision-support systems might expand the scope of practice for lower-cost clinicians—allowing primary-care physicians to handle cases once referred to specialists—and nurses to perform tasks previously reserved for doctors. Telehealth and remote surgery could even enable the offshoring of medical services.

The danger, however, is that medical AI could simply layer new expenses atop existing ones instead of replacing them. Practitioners are likely to resist technologies that threaten their incomes, and the combination of restrictive licensing rules and the insurance-based financing of most care makes the health-care sector especially hard to disrupt.

Tech firms would not mind such an outcome. Many have lobbied for add-on payments for the use of AI-powered medical devices in treating Medicare patients. But this approach would exacerbate some of health care’s most perverse incentives—with the government paying for inputs without regard to their clinical value, encouraging the creation of technologies that raise costs rather than reduce them.

A better approach would be for Medicare and Medicaid to pay indirectly for newly developed medical technologies. Federal programs should rely on insurers, hospitals, and physicians to use payments already provided to them to employ new cost-slashing treatments. Lawmakers should refuse to establish supplemental funding streams.

Incentives for private insurance to control health-care costs can also be improved by switching control over the purchase of insurance from employers to individual workers. Individuals are much more price-sensitive and willing to forgo extraneous expenses when spending their own money. (Think airline tickets.) When individuals buy their own health insurance, they flock to narrow-network plans that exclude the costliest hospitals—reducing prices for outpatient procedures by 26 percent.

Politicians would like to believe that rising health-insurance costs result from bloated hospital budgets, physician overpayments, or administrative waste—problems that could be trimmed away painlessly. But the reality is that health insurance is more expensive because Americans are consuming more, and costlier, medical services, a trend far tougher to reverse. So far, under current payment systems, technological improvements have mostly driven expenditures higher. Smart reforms could shift innovators’ focus toward reducing costs instead.

This article is part of “An Affordability Agenda,” a symposium that appears in *City Journal*’s Winter 2026 issue.

Chris Pope is a senior fellow at the Manhattan Institute.

Photo: Patient demand is growing for innovative technologies and treatments. (Stacey Wescott/Chicago Tribune/Tribune News Service/Getty Images)
