Strategic Initiatives

Energy Lessons of the Strait of Hormuz Standoff // The conflict could spell the doom of “quit oil” policies once and for all.

  • Geopolitical Impact: Recent conflict at the Strait of Hormuz has disrupted approximately 20 percent of global oil supplies, marking a historical scale of interruption comparable to the 1973–74 Arab oil embargo.
  • Economic Necessity: Petroleum remains an essential, inflationary-sensitive commodity integral to global supply chains, modern industrial systems, and the production of renewable energy technology itself.
  • Failure of Transition Policies: Despite over $10 trillion in global spending on energy transitions, total global oil consumption has risen 30 percent, and per capita oil use remains largely unchanged.
  • Limited EV Impact: Current electric vehicle adoption remains at roughly 3 percent of the global fleet, with projections indicating that even a 50 percent conversion rate would reduce global oil demand by only 10 percent.
  • Securing Energy Independence: Resilience against future price shocks requires a strategic shift toward increasing diverse, geographically dispersed oil production rather than pursuing "quit oil" mandates.
  • Infrastructure Expansion: Enhancing energy security is achievable by rapidly constructing new, large-scale pipelines to bypass vulnerable chokepoints like the Strait of Hormuz, modeled by historical engineering successes.
  • Strategic Reserve Management: National security necessitates prioritizing the expansion of strategic oil reserves, as recent large-scale drawdowns have reduced the effectiveness of domestic buffers against supply shocks.
  • Policy Realignment: Emerging trends suggest a growing global consensus that prioritizes energy affordability and security over previous long-term climate-focused energy transition goals.

History records the seventeenth-century Battle of Hormuz as one of the biggest naval battles of that era, fought between the dominant powers of Portugal and an Anglo-Dutch alliance. Today’s battle at the Strait of Hormuz, as all observers now know, is being fought at the center of global energy flows. The war with Iran and the closure of the strait have, predictably, halted nearly 20 percent of global oil in its tracks, making this history’s biggest oil-supply interruption.

For calibration, the Arab oil embargo of 1973–74, triggered by a different Middle East war, resulted in nearly 10 percent of global oil supply being taken off the market for five months. That disruption’s combination of duration and scale eventually led to a tripling of oil prices, a global recession, and in due course, a fusillade of misguided energy policies. If the Strait of Hormuz remains closed long enough, we’ll see similar consequences.


The first energy lesson of this battle should be obvious: oil remains vital to modern civilization. It isn’t just that petroleum is the world’s largest source of energy and most-traded commodity. Oil is essential in the supply chains of everything, from lattes to large language models. It is even critical for building the machines designed to replace oil.

Thus, when oil is expensive, it’s broadly inflationary. Without oil, systems everywhere grind to a halt.

However, advocates of the so-called energy transition are proffering a different lesson. For transitionists, the battle of Hormuz “underscores,” as Microsoft’s global head of energy recently said, the need for renewable energy. We are already hearing loud calls to accelerate or reanimate “quit oil” policies—funding alternative fuels, forcing fuel efficiency or conservation, and even implementing oil rationing or price controls.

There are many uncertainties about how and when the current battle of Hormuz will resolve. But when it comes to the ultimate fallout for energy markets, facts and history point to a different lesson than the transitionists’ “quit oil” nostrums.

The Strait of Hormuz (Photo by Gallo Images/Orbital Horizon/Copernicus Sentinel Data 2025)

Consider the two key results from the more than $10 trillion spent in pursuit of the energy transition over the past two and a half decades. First, per capita global oil use remains unchanged. To be precise, it’s down a statistically irrelevant 2 percent in a world that today has far more people in a far bigger economy. Thus, secondly, total global oil use has risen 30 percent. To put that in context, just the increase in demand is equal to double the European Union’s total oil use.

None of the “quit oil” strategies worked, at least in terms of meaningfully reducing oil demand at any price, never mind at costs economies will tolerate. In short, the world is more dependent on oil now than when the grand and expensive energy-transition experiment began.

At the pinnacle of that experiment, the EV enthusiasms that generated massive subsidies and countless billions of dollars in automotive industry losses resulted in the global automobile fleet having just a 3 percent share that’s all-electric. Even if, one day, one-half of all cars on the planet convert to battery-only propulsion, simple arithmetic shows that this would reduce global oil use by barely 10 percent.
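The arithmetic behind that claim is easy to reproduce. A minimal sketch, assuming passenger cars account for roughly a fifth to a quarter of global oil demand (that share is my assumption; the article does not state it):

```python
# Rough check of the EV claim above. The passenger-car share of oil demand
# is an assumed figure, not one given in the article.
light_vehicle_share = 0.22   # assumed fraction of global oil demand from passenger cars
ev_fleet_share = 0.50        # hypothetical: half the world's cars go battery-electric

cut_in_oil_demand = light_vehicle_share * ev_fleet_share
print(f"Estimated reduction in global oil demand: {cut_in_oil_demand:.0%}")  # ~11 percent
```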

What is the alternative for de-risking economies from the inevitability of another oil price spike in the future? The answer is obvious: economic insulation comes from encouraging greater oil production with greater geographic diversity. And it comes, concurrently, from expanding the size of strategic oil storage, which in turn requires yet more production.

Both strategies entail a philosophy that inverts what policymakers in both Europe and the United States, as well as at the International Energy Agency (IEA), have been fixated on for over two decades. Fortunately for the world’s economies, the momentum is already shifting.

Take, for example, the rising development of Argentina’s high-quality shale fields; offshore South America (especially Guyana) and now, perhaps, Venezuela; and the offshore production of numerous African nations. Even Canada now seems eager to expand production and exports from its virtually unlimited oil sands.

There are also opportunities to diversify, even rapidly, the delivery paths in the Middle East away from the Strait of Hormuz. How? Build more pipelines to bypass the strait.

At this writing, up to 5 million barrels of oil per day are on track to be re-routed through Saudi Arabia’s 750-mile Petroline pipeline, which carries oil overland from Persian Gulf terminals to a port on the Red Sea. The UAE, too, has a 250-mile pipeline that can carry nearly 2 million barrels per day, bypassing the strait. That combination will cover nearly half the trapped crude, but far more is needed, and soon.
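For a rough sense of that coverage figure, assume roughly 18 million barrels per day normally transit the strait (my assumption; the article gives only the 20-percent-of-global-supply figure):

```python
# Rough coverage estimate for the two bypass pipelines named above.
hormuz_transit_mbpd = 18.0   # assumed normal daily transit, million barrels per day
petroline_mbpd = 5.0         # Saudi Petroline re-routing cited in the article
uae_line_mbpd = 2.0          # UAE bypass pipeline capacity cited in the article

coverage = (petroline_mbpd + uae_line_mbpd) / hormuz_transit_mbpd
print(f"Share of normal Hormuz flow covered by bypasses: {coverage:.0%}")  # ~39 percent
```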

History offers an example of how fast long-distance pipelines can be built. In 1942, immediately after the United States entered World War II, American engineers built the Big Inch pipeline from Texas to New Jersey at a rate approaching 10 miles per day. At that pace—which one can imagine modern engineers matching, especially in the EPA-free zone of the Middle East—it could take just a few months to build another 750-mile pipeline, or even two, bypassing the Strait of Hormuz entirely. One hopes that the construction of new Mideast pipelines is already planned, or even underway.
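The construction arithmetic, using the Big Inch pace the article cites:

```python
# Build time for a Petroline-scale bypass at the WWII-era Big Inch pace.
pipeline_miles = 750
miles_per_day = 10

days = pipeline_miles / miles_per_day
print(f"About {days:.0f} days, or roughly {days / 30:.1f} months")  # ~75 days, ~2.5 months
```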

The Big Inch pipeline in 1942 (Photo: Bettmann / Contributor / Bettmann via Getty Images)

Storage, the other time-tested insurance against unexpected commodity interruptions, is a budgetary rather than technological issue. Last year, China announced plans to expand its domestic oil strategic storage to 1 billion barrels—roughly three months’ worth of that nation’s imports. The U.S. strategic reserve peaked at 700 million barrels a decade ago, hovering around that level until the Biden administration sold off about half of it to tamp down oil prices triggered by the combination of the Ukraine war and the inflation-inducing Inflation Reduction Act.

In the face of today’s oil crisis, the IEA and its member states (which include the United States) announced an unprecedented 400-million-barrel reserve drawdown. While that alone can offset only a few weeks of congestion at the Strait of Hormuz, the pipeline bypasses noted earlier will boost that to a longer cushion. Doubtless, many political leaders and businesses would like a longer runway.
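How far a 400-million-barrel drawdown stretches depends on how much flow is actually blocked and how much demand falls at higher prices. A rough sketch, reusing the same assumed 18-million-barrel-per-day transit volume (again my assumption, not the article’s):

```python
# Days of cover provided by the announced reserve drawdown.
drawdown_mb = 400.0          # announced IEA drawdown, million barrels
blocked_flow_mbpd = 18.0     # assumed daily flow halted by the closure
bypass_mbpd = 7.0            # Petroline plus the UAE line, from the article

print(f"Without bypasses: ~{drawdown_mb / blocked_flow_mbpd:.0f} days of cover")                 # ~22 days
print(f"With bypasses:    ~{drawdown_mb / (blocked_flow_mbpd - bypass_mbpd):.0f} days of cover")  # ~36 days
```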

Odds are now high that many policymakers will reprioritize energy security over energy transition. That shift comes at the same time as the resurrection of the primacy of “affordability.” Political momentum, combined with engineering realities and market forces, may finally drain inflated enthusiasms for “quit oil” policies.

Mark P. Mills is a City Journal contributing editor, the executive director of the National Center for Energy Analytics, and author of The Cloud Revolution.

Top Photo by Elke Scholiers/Getty Images


How to Break America’s Great Scientific Stagnation // President Trump’s selection of Jim O’Neill to head the National Science Foundation could open the next great chapter of discovery.

  • Institutional stagnation: Risk-averse bureaucracies and university structures currently prioritize credentialed expertise and incremental research over groundbreaking scientific discovery.
  • Diminishing returns: Despite massive increases in federal R&D spending and higher publication volumes, the frequency of revolutionary scientific breakthroughs is in decline.
  • Talent mismanagement: Current grant distribution models heavily favor academic pedigree, existing reputation, and university prestige rather than identifying and backing high-potential creative talent.
  • Structural incentives: Universities frequently siphon off a majority of grant funds for administrative bloat and overhead, diverting essential resources away from active research and discovery.
  • Nomination of Jim O'Neill: The selection of a Silicon Valley financier to lead the National Science Foundation signals a shift toward prioritizing high-risk, high-reward research and meritocratic talent identification.
  • Proposed operational reforms: Recommended strategies for the NSF include streamlining the grant review process, reducing the age of first-time award recipients, and implementing scout-based selection to bypass traditional prestige-gatekeeping.
  • Strategic independence: Reform efforts aim to decouple public research from restrictive paywalls of academic journals and eliminate administrative indirect-cost siphons that prioritize institutional maintenance over scientific output.

The biggest factor holding back an American revolution in science is not money but talent identification. For more than half a century, risk-averse bureaucracies and universities have let bold ideas and promising discoveries wither on the vine under the guise of credentialed expertise and the virtues of peer-reviewed incrementalism.

The evidence for this Great Scientific Stagnation is substantial. Research productivity is declining sharply across many domains. Federal R&D spending is more than 30 times what it was in 1956, and more scientists are trained and more papers published than ever. Yet revolutionary breakthroughs are becoming rarer. Many peer-reviewed findings fail to replicate, and a high probability exists that the vast majority of papers (which no one reads) are full of false conclusions. Accusations of fraud in science are on the rise.


America must break through this chokepoint by focusing on its greatest resource: talent. It matters how, and to whom, we award grants. We should be working toward tapping the energy at the heart of the sun and hanging our achievements in the balance of the stars. Instead, federal science funding has often drifted elsewhere: on promoting insects as human food ($2.5 million), watching monkeys gamble ($3.7 million), observing brain-damaged cats walk on treadmills ($549,000), and sending cash to DEI bird watching clubs ($288,563).

Fortunately, American science may soon get a lot more exciting—faster, wilder, and even more rigorous. On March 2, President Trump nominated Silicon Valley financier Jim O’Neill as director of the National Science Foundation. O’Neill served most recently as Deputy Secretary of Health and Human Services and acting director of the Centers for Disease Control and Prevention.

With a budget of nearly $9 billion, the NSF determines which nonmedical scientists receive funding, which university labs are supported, and which frontiers advance. With O’Neill at the helm, the old slow-drip model of incremental, consensus-driven funding would get a much-needed shake-up.

But O’Neill must first clear the Senate confirmation process, and the opposition is already sharpening its knives. California Representative Zoe Lofgren, the leading Democrat on the House Committee on Science, Space, and Technology, told Science that O’Neill isn’t fit for the office. “[G]iven his track record at HHS and CDC under [HHS Secretary Robert F. Kennedy Jr.], Mr. O’Neill seems like a bad choice to lead the National Science Foundation, our nation’s premier scientific agency.”

Science ran a second article featuring scientists skeptical of O’Neill. Neal Lane, a former NSF director under President Bill Clinton and a physicist at Rice University, said, “I think it’s unfair to ask him to do the job”—largely because O’Neill lacks an advanced science degree. Michael Turner, a cosmologist at the University of Chicago, added that he sees O’Neill and the Trump administration’s direction as overly commercial and “shortsighted.”

Full disclosure: I owe O’Neill a great deal personally. In 2010, he introduced me to Peter Thiel, for whom he then worked as head of the Thiel Foundation and a research lead at his hedge fund. On my first day, September 27, 2010, we launched the Thiel Fellowship, which awarded 20 young people a year $100,000 grants—with two notable conditions: applicants had to be under 19 and agree to drop out of college.

Critics torched the Thiel Fellowship from the start. My favorite was Larry Summers, former president of Harvard, who called our program the “single most misdirected philanthropy of the decade.” Today, however, the fellowship is a strong predictor of future billionaires. Its grantees have generated more than $500 billion in value since the program began. Its hit rate—the share of fellows who start tech companies that reach $1 billion in market cap—exceeds that of top accelerators like Y Combinator and institutions like Harvard Business School.

What I learned running the fellowship with O’Neill, Thiel, and others during its first three years is that America has lost sight of what it takes to produce new inventions and discoveries. That failure has led those who run major institutions to misjudge talent. The people running Harvard or the NSF—the architects of the “Great Scientific Stagnation”—think in terms of buying prestige: the number of papers published, the status of the journals in which they appear, citation counts, the reputations of endorsing professors, prior grants, grant size, university affiliation, and Ph.D. pedigree.

If you’re head of the NSF, responsible for managing more than $10 billion in federal spending to advance science, your goal should be to buy discoveries, not prestige. And to buy discoveries, you need to find and fund creative genius, wherever it turns up.

When I see universities like Johns Hopkins or the University of Pennsylvania collecting billions in grants each year, I see evidence of a flawed understanding of where discoveries come from. True, their scientists produce solid work and, at times, important theories, but are they worth the billions the government sends them? If the “Great Scientific Stagnation” thesis is right—and the evidence suggests it is—the answer is no.

Why aren’t Americans getting a better bang for their science buck? Grant-makers often overlook talent because of perceived flaws: age, appearance, personality, among others. The supposed defects of Thiel Fellows were their youth and lack of a college degree, yet those attributes proved irrelevant. Our job was to predict outcomes and take big risks; the results speak for themselves.

When O’Neill got the nod to lead the NSF, I texted to ask how I could help. “Send me your ideas,” he replied. Here are five.

First, the NSF should adopt the Thiel Fellowship model (or develop its own) for identifying young, overlooked, and therefore undervalued talent. The central challenge is improving selection while relying on less evidence. At the Thiel Fellowship, we built models that assessed the raw character of an engineer or scientist and translated it into an estimate of strengths and likely success.

I emphasize “young” because creativity is perishable. As with athletes, there is a prime age range when the mind is most fecund and sharp. One of American science’s key chokepoints is that institutions don’t trust younger researchers to do great work. The average age of first-time grant recipients is about 40 to 45; that should drop by two decades.

The economist Benjamin Jones’s research, spanning the past century, shows that innovators are reaching their greatest achievements at increasingly older ages. That might be fine if creativity and productivity remained constant over a lifetime—but they do not. Late starts mean shorter careers, resulting, by Jones’s estimate, in a 30 percent decline in the potential for new discoveries and inventions. By analogy, imagine how unimpressive career statistics would be if Major League Baseball barred players from competing until age 30. Albert Einstein was 26 when he wrote the four papers that revolutionized physics in 1905. Isaac Newton was 23 or 24 when he developed calculus and the theory of gravity. The NSF should be looking in these age ranges for tomorrow’s talent.

Second, the NSF must speed up the grant-review process. As it stands, it’s a circus. Multiple studies find that scientists spend 20 percent to 40 percent of their working hours preparing applications that have only about a one-in-five chance of success and take far too long to process. Endless peer-review rituals and elaborate decision procedures, neither of which improve outcomes, slow the pace of discovery.

We should accept a higher risk of flubs, blind alleys, and dead ends in exchange for a better shot at major breakthroughs. Two ideas are worth testing. One is a partial lottery: randomly select from applications that clear a quick initial screen. The other would be scout programs: recruit a rotating network of proven agents—working scientists, professors, independent thinkers—with firsthand knowledge of emerging talent. No interviews or prestige proxies. Give credentials near-zero weight. Scouts would simply select recipients, who wouldn’t even know they were being evaluated until the funding arrived. To avoid entrenched patronage, scouts would serve two-year terms and then pass their authority to a peer outside their immediate professional circle.
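As a concrete illustration of the partial-lottery idea, here is a minimal sketch; the screening criteria and record fields are invented for the example, not NSF policy.

```python
import random

def partial_lottery(applications, passes_screen, n_awards, seed=None):
    """Fund a random draw from the applications that clear a quick screen.
    `passes_screen` is a fast pass/fail check; no ranking, no prestige proxies."""
    rng = random.Random(seed)
    eligible = [app for app in applications if passes_screen(app)]
    rng.shuffle(eligible)
    return eligible[:n_awards]

# Hypothetical proposals with an invented feasibility flag.
proposals = [{"id": i, "feasible": i % 3 != 0} for i in range(12)]
winners = partial_lottery(proposals, passes_screen=lambda a: a["feasible"], n_awards=4, seed=7)
print(sorted(w["id"] for w in winners))
```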

Third, measure what matters: the expected magnitude of discovery, not the expected number of citations. Create a “renegade scientist” grant program that explicitly rewards risky, interdisciplinary, even unconventional, proposals that peer review tends to kill. Pair it with a clear list of major unsolved problems that the grants are meant to tackle.

Fourth, use the NSF’s funding and authority to break the cartel of prestigious scientific journals. Taxpayers fund the research, only to have access sold back to them at exorbitant prices. The NSF should use its leverage to free that work from paywalled journals by requiring that all government-funded research be publicly accessible.

Fifth, fix the incentives. Cap or eliminate indirect-cost siphons to universities, especially those with endowments in the tens of billions. Fund people, not buildings. Today, if a university scientist wins a grant, the university claws back more than half of it for “indirect costs” or administrative overhead. This rake-off goes not to the actual business of scientific discovery but to salaries for DEI officers or planting flowers in the quad. The NSF should also require an “idiot index”—a comparison between a scientist’s estimate of an experiment’s cost and the university’s. The aim is to drive spending toward tools and lab space, not bureaucracy.
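As described, the “idiot index” reduces to a simple ratio; the numbers below are invented for illustration.

```python
def idiot_index(university_costing, scientist_estimate):
    """Ratio of the university's priced-out cost to the scientist's own estimate.
    Values well above 1 suggest overhead, not research, is driving the bill."""
    return university_costing / scientist_estimate

# Hypothetical figures for a single experiment.
print(idiot_index(university_costing=2_400_000, scientist_estimate=800_000))  # 3.0
```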

Do these five things, and the NSF will become the fire for American ingenuity instead of being a steward of stagnation. We have the money and the talent. What’s missing is the courage to stop buying prestige and start buying discoveries.

Jim O’Neill has spent his career proving he can spot that courage in others. Now he should get the shot to institutionalize it. The Senate should confirm him so the next chapter of revolutionary science can begin. As he wrote in an X post announcing his nomination, “Entropy is on the march and China is not waiting.”

Photo: Raquel Natalicchio/Houston Chronicle via Getty Images


Supreme Court Limits Liability for Internet Service Providers - WSJ

  • Supreme Court Decision: The Supreme Court unanimously ruled in favor of Cox Communications regarding liability for customer copyright infringement.
  • Liability Standard: Internet service providers are liable for copyright violations only if they induce illegal activity or tailor their services specifically for such acts.
  • Judicial Reasoning: Justice Clarence Thomas wrote that providing public access to the internet does not make a company automatically responsible for individual subscribers’ misconduct.
  • Industry Support: Major tech corporations, including Google, Amazon, and Microsoft, supported the ruling to protect against financial risks arising from user actions.
  • Legal Conflict: Sony Music Entertainment and other record labels sued Cox seeking damages for alleged failures to address repeat copyright infringers.
  • Damage Claims: The music industry won a $1 billion jury award at the trial level, which a federal appeals court later overturned.
  • Company Stance: Cox Communications maintained that internet service providers should not function as copyright police and argued it could not verify the accuracy of infringement notices.
  • Court Alignment: The 9-0 decision reinforced protections for service providers, though two justices said the ruling was broader than necessary.

By Lydia Wheeler

The Supreme Court decision for Cox was unanimous. (Photo: Evan Vucci/Reuters)

WASHINGTON—The Supreme Court saved Cox Communications from having to pay steep monetary damages for not terminating the internet service of customers caught pirating music online, in a decision that provides new liability protections for internet service providers.

The court, in an opinion Wednesday by Justice Clarence Thomas, ruled that an internet service provider is only liable for its customers’ copyright infringement if it intended for its service to be used that way or is tailored for illegal activity.

“Under our precedents, a company is not liable as a copyright infringer for merely providing a service to the general public with knowledge that it will be used by some to infringe copyrights,” Thomas wrote.

“Cox did not induce or encourage its subscribers to infringe in any manner,” he wrote.

The decision for Cox was 9-0, though two justices, Sonia Sotomayor and Ketanji Brown Jackson, said the court’s ruling for the company was broader than it needed to be. 

Alphabet’s Google, Amazon and Microsoft were among a group of companies that filed a brief in support of Cox, warning that changing the standard of liability would put them at financial risk if people misuse their products and services. 

Sony Music Entertainment led a group of more than 50 record companies and music publishers in suing Cox for allegedly turning a blind eye to its users’ illegal activity and won big at the trial level with a jury awarding $1 billion in damages. A federal appeals court tossed out that award, but Cox still had been facing significant financial liability in further proceedings.

Sony and the other music labels alleged Cox helped 60,000 of its subscribers distribute more than 10,000 copyrighted works for free from artists ranging from Bob Dylan and Bruce Springsteen to Beyoncé and Eminem.

The music industry uses a service that flags repeat offenders and sends notices to the user’s internet service provider. Cox initially had a three-strike rule for customers, but Sony alleged that over time the company watered down the consequences for infringers to just a stern warning.

Cox argued there was no way to know if the infringement notices it receives are accurate or reliably identify customers who make illegal downloads.

The company said its hard-fought victory had “definitively shut down the music industry’s aspirations” of evicting people from the internet. “This opinion affirms that Internet service providers are not copyright police and should not be held liable for the actions of their customers,” Cox said.

Sony and attorneys for the music publishers didn’t respond to requests for comment.


Appeared in the March 26, 2026, print edition as 'Supreme Court Curbs Internet Providers’ Pirated-Music Liability'.

Lydia Wheeler is a legal affairs reporter in The Wall Street Journal’s Washington, D.C., bureau. She previously covered the Supreme Court and a range of legal issues at Bloomberg Law and The Hill.



Vibe Maintainer | by Steve Yegge - Freedium

  • Vibe Maintenance Concept: Managing large volumes of AI-generated pull requests by integrating AI tools into the project-maintenance workflow.
  • Community Throughput Optimization: Prioritizing high merge rates to foster engagement and prevent project forking within the open-source ecosystem.
  • Automated Decision Tree: Using a system of agents to categorize contributions into easy-win, fix-merge, or needs-review statuses.
  • Proactive Maintenance Strategy: Fixing substandard community submissions directly instead of requesting changes, to ensure rapid feature delivery.
  • Strategic Fork Avoidance: Weighing the risk of contributor dissatisfaction in order to retain users and centralize development effort.
  • Lightweight Hygiene Rules: Implementing guidelines on cross-project pollution, framework cognition, and plugin usage to maintain code quality.
  • Human Judgment Necessity: Retaining manual review for the final, complex features where current AI models lack sophisticated design taste.
  • Technological Foundation: Leveraging specialized stacks like MEOW and Dolt to enable version-controlled, queryable agent operations.

Some attendees at an AI Tinkerers meetup in early Feb were asking me what it's like to be the maintainer of a big OSS project where the community PRs are all AI slop. They thought it would make for a good blog post. I thought so too, at the time!

It turned out to be very, very, very hard to write down. It's in many ways the opposite of conventional wisdom for software maintainers, OSS or otherwise. So this post has given me over 2 months of writer's block. (I had to update that duration many times while writing this.)

Why is it so important for me to tell you how I deal with a storm of AI-generated PRs? Because I'm beginning to believe that my "vibe maintainer" workflow, crazy as it might sound, will be what a lot of you are doing before long. Everyone who works on successful OSS will soon have to deal with PR storms.


Vibe Maintainer: Using AI to help manage AI-generated PRs

The Rising Tide

To give you a sense of the scale I work at, I'm cruising towards 50 contributor PRs a day, combined between Beads (20k stars, 5 months old) and Gas Town (13k stars, 3 months old). That's seven days a week; if I take a day off, they pile up and I may have to deal with 100 or more in a single day.

It's an enthusiastic community. We've had over 1000 unique contributors between the two repos, with over 4k PRs (2300+ merged), and over 15k commits, all in just a few months. And we have a great community with almost two thousand Gas Town users hanging out chatting on the Gas Town Hall Discord.

Through all that, my median time to resolution is about 15 hours, with few PRs waiting more than a few days. This is high velocity. But I still manage to keep my quality bar high enough that both projects continue to exhibit strong growth. It may look like I'm only merging 60% of the PRs from the metrics, but that's an artifact of fix-merging. I actually merge about 88% of all incoming contributor PRs, one way or another, and both projects are flourishing from it.
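The gap between the visible merge rate and the effective one comes from counting fix-merges, cherry-picks, and split-merges as absorbed work. A small sketch with made-up outcome labels (the real Beads/Gas Town tooling is not shown in the post):

```python
from collections import Counter

# Hypothetical per-PR resolution labels, mirroring the workflow described later.
outcomes = ["merged", "fix-merged", "merged", "cherry-picked", "rejected",
            "merged", "fix-merged", "split-merged", "merged", "retired"]

counts = Counter(outcomes)
visible_rate = counts["merged"] / len(outcomes)
absorbed = sum(n for label, n in counts.items()
               if label.endswith("merged") or label == "cherry-picked")
print(f"GitHub-visible merge rate: {visible_rate:.0%}")              # 40%
print(f"Effective absorption rate: {absorbed / len(outcomes):.0%}")  # 80%
```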

Beads in particular is well-integrated with the broader ecosystem — for instance, it now has solid integration with GitHub, GitLab, Linear, JIRA, Azure DevOps, Notion, and five self-hosted storage options. This is all encapsulated as a rich plugin interface for backend engines that each of them implement. Nearly all of that was from community contributions.

It's safe to say people love Beads and Gas Town, even though mostly only agents have ever seen their source code. I certainly haven't. Why the fuck would I look at their code, I'm already pretty busy. No time for field trips.

I'm a very lazy person, and maintaining a popular OSS repo, let alone two of them, would simply not have been possible for me, like ever, up until maybe a year ago. I'm getting by with AI, that's the only way. As my PR volume has increased, I've been able to keep afloat through model and tool improvements, automating as much of the decision tree as I can.

Even with AI help, keeping up with community PRs takes me 15–20 hours a week, usually 2 to 3 hours a day. Sometimes much, much more.

I wish I could tell you it's easy work. I have managed to automate all the easy stuff, which is about half the PRs. I was recently inspired by Dane Poyzer's gt-toolkit package, a series of Gas Town formulas he published that now has its own little user subcommunity. Dane's formulas help you run his ambitious, long-running idea-to-delivery workflow, which is geared toward comprehensive feature development and moving through mountains of work with his Gas Towns. My own PR workflow is now a formula as well.

Before we dive into the vibe maintainer workflow, let's revisit why it's needed at all. Am I not just bringing this on myself by allowing AI-assisted PRs in the first place?


Life of a Vibe Maintainer: Gems and Dead Birds

Saying No to AI: The "Fork You" Problem

Since 99% of my incoming PR submissions are AI-generated, it stands to reason that I could reduce my workload by 99% by saying "No AI PRs." In that world, instead of doing all this crazy vibe maintainer stuff, I'd just wake up every morning, brew some coffee, shake my fist at the sky, browse HN, and maybe take the mule to town. The easy life.

Most OSS maintainers go this route. They straight-up forbid AI-generated or even AI-assisted pull requests. And I can understand why. The crap you see in AI-generated PRs these days can turn you into the Clint Eastwood angry-porch meme, practically overnight. Rather than deal with it, they outlaw it.

That of course triggers an arms race. You can get an AI-assisted PR accepted, but only if you sneak it in. So we hear comical stories of once-rejected PRs suddenly being accepted after they're resubmitted with all the AI DNA scrubbed from the crime scene, hee hee haw haw.

But that's the official party line for most OSS projects today: "No AI." It all has to be done by sneaking. The whole status quo in open-source is characterized by historic levels of silliness.

Here's the problem with that old-school approach. We are headed toward a world in which if you refuse enough PRs, the community will consider you a dead-end street and begin routing around you. They will copy your software, either by forking it or rewriting it from the ground up, and use their own mutated version from then on. They might even develop a community around their version.

Software survival is now about velocity–specifically, keeping up with what your users want. Even a permissively-licensed OSS project can be forked and lose a ton of users and mindshare, if the project owners don't listen to a sufficiently large subcommunity of their happy users.

As just one of many easy examples, Roo Code is a big community that forked off Cline (itself built on VS Code). And now people are forking Roo; I've seen some sizable ones taking shape. That's a sad list of at least three communities effectively fighting over a code base. They could have united, but none of them could keep up with what their own users wanted, so it keeps on forking.

That particular fork family all happened before coding agents came into their own in late 2025. It was hard back then. Now that everyone on earth has access to powerful coding agents, we will see way more forks. Forking used to be a declaration of war. Now it's simply a declaration that someone liked your software enough to want to change it, but you said No.

It's always been trivial to create a fork. The hard part has always been maintaining the fork. As the fork's community grows, so does the maintenance burden. It used to take a good-sized team to support a fork, which was almost like a rival gang. Nobody liked a fork. There was always bad blood. The XEmacs/Emacs rivalry was absolutely the stuff of legend. You wouldn't believe some of it.

It used to basically require corporate backing to have a credible fork of a large OSS project. Today, things are astoundingly different. With the coding agents of 2026, everyone who loves your software is a credible threat to forking you. Any grandma who wants to use your software for gardening could build a massive grandma subcommunity with your shit if you don't take her PRs. She might not even know she's done it. It's just there for the taking, yo, and you wouldn't flex.

There's nothing wrong with forks per se; that's how evolution will happen. But it's also a ton of duplication that might not have been necessary. A lot of people just want to add a small feature or fix to an existing software product, not clone the whole damn thing and maintain it for the rest of their lives. They want to share tokens and energy by pooling features and fixes together, as a community. So there are good reasons to avoid forking.

I do actively encourage forks where people are trying to take the code in a direction I just can't follow. The earliest big fork was beads_rust from Jeffrey Emanuel, who told me he was very embarrassed to be forking my code, until we chatted about it and I gave it my heartfelt blessing.

This was a situation where he wanted only the streamlined original code's behavior. Nothing else. And that's one thing I couldn't offer with my kitchen-sink approach to making my tool community- and AI-friendly. So I was happy he made that fork for those who need that streamlining.

If your software is popular, and you want to avoid forking (or rewriting), you need to build and foster your community. You need to let your system expand to accommodate the needs of as many users as practical.

People will ask more and more of your software, so you also have to decide where to draw your lines. You choose what belongs in your code, and what to exclude–any of which may go off to live in a fork somewhere to "compete" with you. Choose wisely!

The Gravitational-Well Finish Line

My own approach is radically different from how most OSS maintainers work: I say Yes to AI. Instead of rejecting AI submissions, I encourage everyone to use AI to submit their PRs (subject to a growing list of hygiene rules). Indeed, I both observe and expect 99% of incoming PRs to be AI-assisted.

Why? Because this empowers my users to turn their wishes into code and get it into the system. It keeps them from thinking about off-ramps. It keeps them from forking me, and unites them into a larger community, which means they all benefit from sharing rather than reimplementing. It's a net token savings, so software with strong communities will tend to survive.

Do some contributor PRs belong in a fork? Absolutely! I maintain high standards for what goes into the Beads and Gas Town core. I'll reject PRs for many reasons, including being too opinionated or too niche, or not pulling their tech-debt weight.

But if there's a germ of a good idea in there, I try hard to find it and cultivate it.

Instead of requiring perfect PRs from everyone, I aim to find a quick resolution that is satisfying to all parties. I accept most PRs, but still maintain hard lines on architecture, what goes in core, code quality, and many other AI-era design principles (e.g. ZFC). If I were to send every PR back to the contributor for fixes, the rest of the community might be losing out on some important fix or feature for days to weeks. And there it is, sitting there in the PR; it just has issues with it.

In this situation, if you want to maximize throughput, then you may need to fix the contributor's code yourself before it can be merged. Most OSS maintainers say, "Go fix your code." I try my best to fix it myself and get it merged. There's an art to this that I'll discuss below.

My core philosophy is, help contributors get to the finish line. I optimize for community throughput. I review every PR and try to find the value in it, and have my worker agents do something appropriate for each one.

The PR Sheriff

My sheriff workflow consists of runs, where I try to resolve all PRs in a run, though I'm not always able. A run kicks off automatically every time I restart my designated sheriff crew members in my Gas Town, because I place a "sheriff bead" on their hook. So they notice it when they wake up. I can also start runs manually.

I discovered today, after many months of using this workflow with my Gas Town crew members, that the Mayor is actually way better at it. Like, far, far better, it's crazy how much better it is. The Mayor, acting as PR Sheriff, takes a more holistic view, makes better decisions, and makes better use of Gas Town resources to get the PRs reviewed, fix-merged, and escalated in parallel.


The Mayor is my new PR Sheriff for my rigs

This was big news. It means I can shrink my Gas Town crew down to maybe 3 agents per rig (from 8 apiece), which will be huge for memory pressure. Huge for running it in parallel with Gas City while I cut over. More on that later.

Anyway, the workflow begins with the sheriff pulling descriptions of all open PRs, and categorizing them into easy-wins, fix-merge, and needs-review.

Easy wins are things like targeted bug fixes, doc updates, dependency bot auto-upgrades, and automatically closing drafts, PRs from banned contributors, etc. These are handled automatically every 2 hours with a patrol, which contributes to my sub-day median turnaround time. And they're handled automatically during a PR sheriff run.

The first fix-merge candidates are easy wins that are broken for some reason — they fail CI, they need a rebase, or they have a simple error in them, but aside from that, they fit all the easy-win criteria. The sheriff may decide to auto-fix-merge those. My Mayor decided to sling them to polecats, which was nice.

Needs-review is any PR that looks kind of suspicious for some reason, so we're going to have to have an agent suss it out, do a deep dive, and produce a report. These can be farmed to crew members or polecats, as the instructions are usually pretty simple. The reports can be handled however you like, e.g. having the sheriff summarize them for you. Or I sometimes go directly to the agents' tmux sessions and read their reports.

From needs-review, we get a set of possible recommendations:

  • Easy win. Oops, it turned out to fit the easy-win criteria after all. Happens sometimes.
  • Merge. Agent recommends to human that we merge this PR. It may be small or big, but it is well-tested, broadly useful, well-documented, and good to go.
  • Merge-fix. It's mergeable but we need to fix some issues afterward. But it's OK to do it in a follow-up commit. We merge the PR as-is, then push a follow-up fix to main.
  • Fix-merge. It's pretty busted, so we're going to pull it locally, make a bunch of changes, and then we'll push it; you will see the contributor attribution in the CHANGELOG.
  • Cherry-pick. The PR contains M items (features and/or fixes) and we only want N < M of them. We cherry pick the N things locally, fix them as needed, and commit them with attribution. We close the PR, effectively throwing the rest away, with an explanation.
  • Split-merge. The PR contains M items, but they're separate concerns, and really should all have been in separate PRs. We pull it locally, split them into separate commits, and push them all with attribution to the original contributor.
  • Reimplement. The PR is essentially rejected, perhaps because I don't like its design. But it was trying to solve some fundamental problem. So we see if we can find a better design, and if so, we implement it that way. We then close the PR thanking them and letting them know how we solved it.
  • Retire. This PR is obsolete; it may have been superseded by another PR (often from the same author, interestingly), or fixed by some other mechanism. Close it with a thank-you.
  • Reject. This may be a feature that does not pay its weight in tech debt, or one that is too niche to include in the core. Or it might be a design that does not fit my standards. Close the PR with a polite note to the sender.
  • Request changes. Last resort. This can lead to contributor starvation, so there's almost never a good reason to do this, but I do use it occasionally.

There are a few other possible outcomes, such as re-routing the PR to the right project, banning the contributor, etc. But this is a pretty good starter list.
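For the record, here is how that outcome list might look as a first-pass triage sketch; the PR fields and thresholds are invented stand-ins, not the actual Beads/Gas Town schema.

```python
from enum import Enum

class Outcome(Enum):
    # Resolution categories from the list above, roughly in order of preference;
    # requesting changes sits last on purpose.
    EASY_WIN = "easy-win"
    MERGE = "merge"
    MERGE_FIX = "merge-fix"
    FIX_MERGE = "fix-merge"
    CHERRY_PICK = "cherry-pick"
    SPLIT_MERGE = "split-merge"
    REIMPLEMENT = "reimplement"
    RETIRE = "retire"
    REJECT = "reject"
    REQUEST_CHANGES = "request-changes"

def triage(pr):
    """First pass: sort a PR into the three sheriff buckets. A needs-review
    deep dive by an agent then recommends one of the Outcome values above."""
    if pr["is_trivial"] and pr["ci_green"] and not pr["needs_rebase"]:
        return "easy-win"
    if pr["is_trivial"]:
        return "fix-merge"       # an easy win that is merely broken
    return "needs-review"        # farm out for a deep-dive report
```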


The PR decision tree has many outcomes

Notice that the first resort for almost every OSS maintainer, which is to send your PR back requesting changes, is the last resort in my vibe maintainer workflow. It ranks even lower than rejection, which itself is very serious, because rejection can lead to forking. And it's cumulative. The more PRs you reject, the higher the chance of someone getting fed up with you.

Requesting changes is unfortunately the last resort because it quickly leads to contributor starvation; if you keep making them rebase, the sheer velocity of the project can prevent their PR from landing for weeks, until you take steps to help it along. So you might as well help it right from the start. Don't send it back for changes.

If there is good in the PR, then you should absorb that good into your code base, right there and then, rejecting anything you don't like, and transforming the parts you're absorbing. You can make bug fixes, architectural fixes, change the naming, make it a plugin, mix and match.

Most of the time, it's Claude telling you, hey, this PR is mostly healthy but it's missing a kidney, and you say, please add the kidney.

But sometimes, Claude looks at a PR that adds a face-hugging alien to each worker, and it sizes it up, and says, "This PR is well-constructed, and the alien is robustly hugged to the agent's face, with good test coverage and updates to all relevant work formulas."

And you say, Claude, it's a fucking face-hugging alien. And Claude says, oh right, that's a very good point, we probably don't want that, shall I close it with a polite note?

And that's why the last 25% or so of pull requests need human review. At least, so far. It's because there's still a thing called taste that current models can't be trusted with. Not yet.


Fix-merging: Keep the good parts, give contributor credit

PR Hygiene

I've instituted some lightweight hygiene rules for contributors. I don't enforce them yet, and I've only announced them in a few places. I'm working to get them baked into the CONTRIBUTOR.md files and other important locations.

Here are some examples of hygiene rules for my repos:

  • Cross-project pollution: Beads must not know about Gas Town. Do not put Gas Town concepts into Beads. Gas Town doesn't know about Gas City, and the Wasteland doesn't know about any of them.
  • Zero Framework Cognition: Read it, learn it, live it. I wrote a blog post about it. I mean it.
  • Use plugins whenever possible. Do not put stuff in core if there is a way to do it with integrations/extensions/plugins.
  • Don't submit drafts. I'll just close them.
  • One concern per PR. Split up large PRs.
  • Remove all unnecessary files. Minimalist. Make the fewest changes possible.
  • Rebase. Don't use an old fork; rebase right before you submit the PR.

Right now, my policy is to fix all these things, and just complain about them when I close the PR. As the models get smarter, more of this can be handled automatically. But in general, I think people who abuse this leniency might start getting banned, since they're basically pushing the QA for their PRs (and the associated costs of fixing them) onto me.

Summing Up

That's a pretty good overview of the PR workflow. I know I said it was hard to write down. It was, dammit.

When you get down to the last 5–10% of the PRs, they're usually hefty features, and you may need to spend a lot of time digging into them and figuring out whether you really want to pull them in. You may ask the agent a ton of questions about each PR, ask it to consider alternatives, or just tell it you don't see the point. If the agent can't justify the PR, then it's probably not worth taking yet.

But that last 5% to 10% is the part that takes hours a day. There's always a list of PRs that are just right on the edge and need my judgment call, though I wish it weren't so. It is definitely getting easier now that Gas Town is becoming essentially feature-complete, due to the launch of Gas City. So I auto-close any large features and redirect the contributors to Gas City.

Summing it all up, being a vibe maintainer means trying to absorb the good parts from every PR, and there are various techniques for approaching it, depending on what's wrong with the PR. It means only rejecting PRs with no alternative as a last resort, since each rejection increases the chances that someone's going to fork you.

Most of all, it means helping contributors get to the finish line, with attribution, in whatever way you deem best for your projects and repos. Maximize community throughput, and you'll have a happy and thriving community. In fact, I'm pleased to report that Beads crossed from 19.9k to 20k stars on GitHub as I was finishing this draft tonight.

My main success metric is whether my users are happy, and so far it's looking pretty good!

Speaking of Gas City

Gas City went to alpha last week, and aims to be generally available later in April. What's Gas City, you ask?

Gas City is a ground-up rewrite of Gas Town from first principles, using the MEOW stack (i.e., Beads and Dolt, the fundamental substrate of Gas Town). It is almost a perfect proper superset of Gas Town's features, and can be used as a drop-in replacement for Gas Town — except Gas City is also an orchestrator-builder.

Gas Town is a "pack" within Gas City, a fully declarative bundle of prompts and skills, no code at all — it still has the iconic original Mayor, Deacon, Dogs, Polecats, Witness, Refinery, and Crew, with their various hooks, inboxes, skills, prompts, and sandboxes. But there's no dedicated code — unlike in the OG Gas Town, which is a large kitchen-sink binary.

Gas City was built by my buddies Julian Knutsen and Chris Sells, and as far as I can tell, it is exactly what I envisioned and outlined to them when they first suggested tackling it. They did a bang-up job. So good that Gas Town itself, the binary, is I think not long for this earth. Only its shape, the original characters, and all the individual features we loved about Gas Town remain — all available as LEGO-like pieces for creating your own agent orchestration shapes.

Beads, the original, powered fully by Dolt (Git for structured data), will live on as version 1.0. The MEOW stack from Gas Town, including formulas, molecules, hooks, the GUPP engine, and Nondeterministic Idempotence (NDI) are all implemented in Beads and Dolt. That means that all work is decomposed into durable, version-controlled, SQL-queryable orchestration steps.
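To make "SQL-queryable orchestration steps" concrete, here is a purely hypothetical query sketch. Dolt serves the MySQL wire protocol, so any standard client works; the table and column names below are invented, since the actual Beads/MEOW schema isn't shown in this post.

```python
import mysql.connector  # Dolt's `dolt sql-server` speaks the MySQL wire protocol

# Everything below the connection line is illustrative: invented table and
# column names, not the real Beads schema.
conn = mysql.connector.connect(host="127.0.0.1", port=3306, user="root", database="beads")
cur = conn.cursor()
cur.execute(
    """
    SELECT step_id, status, assigned_agent, updated_at
    FROM orchestration_steps
    WHERE molecule = %s AND status <> 'done'
    ORDER BY updated_at DESC
    """,
    ("pr-sheriff-run",),
)
for row in cur.fetchall():
    print(row)
conn.close()
```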

That's all stuff that your orchestrator doesn't have, especially if you're using something Claw-based…unless you're also using Beads, or else you reimplemented the whole damn MEOW stack, including Dolt, which would just be incredibly foolish of you.

I think the MEOW stack and Dolt give both Gas Town and Gas City a gargantuan advantage over anyone using postgres and snapshots, or git-lfs, or really anything other than Dolt or Datomic for their agentic memory system. If you're using Datomic, bravo. You've got a nice system with everything agents need for world-class forensics and mistake-recovery in production. I just prefer Dolt because it uses Git as its protocol. Both are great.

If you think of Gas Town as a Dark Factory (it is!), then Gas City is a Dark Factory Factory. I am putting my money where my mouth is, and diving in to start orchestrating my own game's production systems using Gas City as my new SREs. I want to live this a bit before I evangelize it further.

Stay tuned for several upcoming announcements and new blog posts from me. I've been sitting on quite a backlog while I was trying to squeeze this one out. Unnnggghhh. I've got a post about dark factories coming up, including some insights into how to make a coding agent into the perfect dark factory worker — something I think coding agents will all need soon just to survive. But as a dark factory user, I'm biased, so who knows.

I've mentioned the Wasteland, and there's more coming there. There has been a bunch of work behind the scenes, and it's an important building block. Our two thousand Discord users are basically an army. We're sitting on that army and it's restless. I'm headed to Portland in the morning for a meeting of our generals, called by Chris Sells, and one of the topics for discussion is how best to put that army to use. Fun times.

I've also got upcoming announcements about Gas Town and Beads, both of which are headed to v1.0 very soon. And a blog post in the form of a totally untrue made-up fantasy-horror campfire story about a fake monster called the SaaS-Eater, whose wings shall beat a mighty SaaS Hurricane, whose wind will de-SaaS companies and save them millions of storytale coins, which is all of course utter hogwash, merely an amusing fiction for scaring small children and investors and the general public. But I'm a horror fan, so we'll see. Maybe an army could beat it. Or create it. Now that my Vibe Maintainer post is out, I can finally get to that blog backlog, and try my hand at some proper fiction.

See you next time. And to all of you who, one way or another, manage to read everything I write — thank you! When I hear those stories it really does help with the writer's block. I hope you're all having fun with agents and orchestrators. I'm absolutely having the time of my life. More posts coming soon!

Don't forget to come visit our Discord at gastownhall.ai!


On to Gas City in April!


Social Media Hurts Kids’ Brains. Or Maybe Not?

  • Academic Debate: Recent social science research presents conflicting findings regarding the impact of social media usage on the cognitive development of children.
  • Primary Study: An analysis using Adolescent Brain Cognitive Development data suggests that increased social media consumption during tween years correlates with lower verbal and spatial memory scores.
  • Methodological Critique: Independent data analysis by Jordan Lasker disputes these findings, asserting that alternative statistical models show little evidence of negative cognitive effects.
  • Research Methodology: The original research utilized longitudinal data to track cognitive performance fluctuations alongside changing social media habits across three usage-intensity groups.
  • Comparative Analysis: Critics argue for more rigorous evaluation methods, such as sibling comparisons and within-individual longitudinal assessments, to control for familial and home environment variables.
  • Scientific Uncertainty: Re-evaluating the data with more granular models renders most previous findings statistically insignificant, highlighting the difficulty in establishing definitive causal links.
  • Variable Interpretations: Even within recalculated models, minor negative associations persist, though researchers suggest these may stem from broader family-level trends rather than direct screen-time impact.
  • Ongoing Investigation: The lack of consensus among researchers underscores the complexity of utilizing large datasets to measure the long-term cognitive consequences of digital device exposure.

[man holding smartphone taking photo](https://images.unsplash.com/photo-1546052725-72b0b8b9f4b6?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1Mnx8c29jaWFsJTIwbWVkaWElMjB0ZWVuc3xlbnwwfHx8fDE3NzQ2MzAyODB8MA&ixlib=rb-4.1.0&q=80&w=1080)

Courtesy Joel Mott/Unsplash

Kids’ cell-phone use is one of the hotter social-science topics these days, and one we’ve touched on here before. A recent kerfuffle shows why the debate is sure to keep raging for a long time.

The story begins with a study published in The Lancet Regional Health, which got some extra attention thanks to a tweet by Jonathan Haidt—who’s led the intellectual charge against kids’ overuse of tech devices.

In the study’s analysis, the kids who most heavily increase their use of social media during their tween years tend to have weaker cognitive skills, including verbal and spatial memory, than similar kids who stay off social networks. However, the study faced an almost immediate response from the prominent data blogger Jordan Lasker, better known as Crémieux, who ran different models on the same dataset and argued there’s little sign of an effect. His blog post is here and a longer paper is posted here.

The episode is an interesting lesson in how science can move quickly online—if not so much in formal academic journals—and how different ways of analyzing a given dataset can produce wildly different conclusions.

The original paper drew its data from the Adolescent Brain Cognitive Development Study, which began following its thousands of subjects in the mid-to-late 2010s, when the kids were 8 to 11 years old. Thanks to further data collection over the following two years, the authors can sort the kids not just according to their overall social media use, but according to how this use changed over time.

More than half of the sample used very little social media at any point. About 40 percent fell into a second group: kids who started as light users but increased their use over time; the 12-year-olds in this group used social media for close to an hour a day. The third group, about 6 percent of the sample, comprised the heaviest users: even nine-year-olds in this group used social media close to an hour a day, and the almost-teenagers used it for more than three hours a day.

The authors check to see how these groups fared on cognitive tests at the study’s two-year follow-up. They include a variety of statistical controls, including the kids’ baseline cognitive performance—which should help to address issues of self-selection, where smarter or duller kids may be more likely to take up social media—as well as basic demographics and non-social media screen time.

These models answer the question: if two kids started out with the same cognitive scores, and also share numerous other traits available in the data, and yet they had two different trajectories of social media use, does the heavier social-media user tend to fare worse on the later cognitive test?
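As a rough illustration (not the study's actual code, and with made-up variable names standing in for the ABCD measures), that comparison amounts to a covariate-adjusted regression along these lines:

```python
# Rough illustration of the covariate-adjusted comparison described above.
# Not the study's code; column names are hypothetical stand-ins for ABCD variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("abcd_subset.csv")  # hypothetical extract, one row per child

model = smf.ols(
    "memory_score_y2 ~ C(sm_trajectory) + memory_score_baseline "
    "+ age + C(sex) + household_income + other_screen_time",
    data=df,
).fit()
print(model.summary())  # coefficients on sm_trajectory are the adjusted group gaps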

The study’s answer was yes. Across four different cognitive tests, there was always a measurable gap between the lightest and heaviest users. And for three of the four, there was also a gap between the lightest users and the medium group, the kids who’d started out light but increased over time. These are not enormous effects—generally falling between a tenth and a quarter of a standard deviation—but they are certainly worrisome given the ubiquity of screens for kids (including those at older ages than the study focused on).

Enter Crémieux. The blogger pointed out that the data in question facilitate a much more rigorous analysis than the authors had conducted.

Instead of comparing totally different kids with each other and relying on statistical adjustments to make that comparison more apples-to-apples, one could look at siblings—who share a family background and home environment—to see if the heavier-using siblings fared worse. One could also study outcomes from both survey waves, instead of focusing on performance at the two-year follow-up. One could even look within individuals, to see if kids’ own cognitive scores fluctuated along with their social-media use.
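The within-individual check can likewise be sketched, again with hypothetical variable names, as a demeaned (fixed-effects) regression in which only changes within the same child identify the effect:

```python
# Rough illustration of a within-individual (fixed-effects) check.
# Not Crémieux's code; variable names are hypothetical. One row per child per wave.
import pandas as pd
import statsmodels.api as sm

long = pd.read_csv("abcd_long.csv")

# Subtract each child's own mean, so only within-child changes remain.
for col in ["memory_score", "sm_hours"]:
    long[f"{col}_within"] = long[col] - long.groupby("child_id")[col].transform("mean")

X = sm.add_constant(long[["sm_hours_within"]])
fe = sm.OLS(long["memory_score_within"], X).fit(
    cov_type="cluster", cov_kwds={"groups": long["child_id"]}
)
print(fe.summary())  # a negative sm_hours_within coefficient is the claimed effect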

With these approaches, the results are underwhelming, though still debatable. Most of Crémieux's results are statistically insignificant, but two of the within-individual models still suggest negative effects.1 Lasker's paper suggests that, since these effects weren't evident in the family-based analysis, they might reflect "family-level time trends" instead of a real effect.

The debate over the impact of kids’ tech use—on cognitive skills and everything else—is far from over. And it’s too bad that even the fanciest statistical tools can’t unambiguously tell us what’s happening.


1. It's tempting to think that even these results are far smaller than the original study's, but note that the new models calculate the effect of one hour of social-media use rather than the difference between the high and low trajectories.



New York Is Holding Back American AI // Albany is sabotaging innovation, with national and geopolitical consequences.

1 Share
  • Legislative Overreach: New York State has introduced over 180 AI-related bills in early 2026, creating a dense regulatory environment that threatens to stifle technological innovation.
  • Economic Risks: The state's aggressive regulatory stance and potential moratorium on data-center construction risk accelerating a business exodus, hindering U.S. competitiveness against China.
  • Regulatory Inefficiencies: Existing mandates, such as NYC’s algorithmic hiring audit requirements and union-focused government AI restrictions, serve as barriers to efficiency rather than fostering productive technological adoption.
  • Infrastructure Obstruction: Legal challenges and environmental demands are currently stalling key infrastructure projects, such as the Micron Technology chip-making facility, jeopardizing vital domestic manufacturing capacity.
  • National Implications: The state's policy decisions are increasingly influential, with local frameworks like the RAISE Act and algorithmic transparency laws potentially impacting broader national standards and discouraging innovation in other states.
  • Strategic Conflict: There is a marked disconnect between New York's restrictive local policy agenda and the national security imperative to maintain a technological lead over China in artificial intelligence.

The United States is engaged in a high-stakes, winner-take-all technological cold war with China. But the U.S. cannot lead in artificial intelligence if its largest economic zones function as anchors, not propellers.

Unfortunately, New York State is pushing for more innovation-killing regulation, spinning up red tape and mandating bureaucracy. Just three months into 2026, legislators in Albany have introduced more than 180 AI-related bills, far exceeding the quantity of legislation in any other state and even doubling that of California. There’s a bad idea for everything: national AI lab development rules, disparate-impact paperwork assessments and audits, algorithmic-pricing regulations, “robot taxes,” and AI rules in journalism and hiring.


New York’s everything-and-the-kitchen-sink approach to AI regulation will accelerate a continued business exodus. Consumers will suffer under the state’s micro-managerial, paperwork-first policies. And, because many regulations will have negative spillovers, the damage will extend beyond New York.

The harms aren’t hypothetical. Last year, Governor Kathy Hochul signed the Responsible Artificial Intelligence Safety and Education (RAISE) Act, which, after chapter amendments, regulates potential “catastrophic” risks associated with major AI systems. Governor Hochul boasted that the law sets the “national standard” for AI governance.

But lawmakers in Albany should not be dictating policy for the entire nation, especially on decisions of such magnitude. These issues are best addressed by national security experts with the necessary clearances and information to weigh risks and consequences.

To win the AI race, the United States needs to build more data centers. But New York is floating a three-year moratorium on data-center construction. By the time it elapses, the U.S. will have lost the AI race and forfeited its leadership position to Beijing.

Even when some New York lawmakers seek to boost AI capabilities, others erect new roadblocks. For example, the state promoted a $100 billion Micron Technology chip-making complex in Clay, New York. But opponents quickly filed lawsuits to halt construction until environmental and labor demands were satisfied, including two lawsuits filed on the day the project broke ground. Construction has already suffered serious delays, and the project could be derailed altogether.

It’s not just state lawmakers obstructing AI progress. In 2023, New York City enacted a first-in-the-nation law requiring race- and gender-bias audits of algorithmic hiring and firing tools. After a Cornell study found that only 18 of 391 city employers posted the audits as required, the Society for Human Resource Management (SHRM) declared the law a bust. But the measure had already inspired the sweeping Colorado AI Act, a law that Colorado’s government has regretted ever since.

Then, in 2024, Governor Hochul signed the Legislative Oversight of Automated Decision-making in Government (LOADinG) Act. The bill was sold to the public as a way of ensuring ethical, transparent, and accountable government AI use. In reality, it baked in layers of union protectionism, paperwork, and impact assessments.

The law stipulates, for example, that automated decision-making systems cannot result in a “transfer of future duties and functions ordinarily performed by employees of the state or any agency or public authority.” Such language is not written to help public employees make effective use of AI. It’s designed to protect their jobs at taxpayer expense.

New York State’s approach to innovation is at odds with the views of the state’s congressional representatives, who recognize the dangers of falling behind China on AI. Last year, Senator Chuck Schumer described the announcement of China’s powerful DeepSeek AI model as “AI’s Sputnik moment for America.” It was a “wakeup call that Congress desperately needs” to get serious about the AI race.

“If America falls behind China on AI, we will fall behind everywhere: economically, militarily, scientifically, educationally, everywhere,” Schumer said.

Schumer is right, and his state needs to listen. America cannot afford to let New York sabotage both itself and the nation’s innovators. This is how you lose a technology race.

Logan Kolas is the director of technology policy at the American Consumer Institute. Adam Thierer is a senior fellow for technology and innovation at the R Street Institute.

Photo: Will Waldron/Albany Times Union via Getty Images
