Strategic Initiatives

China Bans Meta’s Acquisition of Manus on National Security Grounds - WSJ


LLM (google/gemini-3.1-flash-lite-preview-20260303) summary:

  • Regulation Order: china national development and reform commission mandates the reversal of the meta acquisition of manus ai.
  • Security Grounds: government officials cite national security concerns as the primary justification for banning the technology deal.
  • Startup Origin: the ai firm was initially developed by engineers at a chinese entity known as beijing butterfly effect technology.
  • Operational Relocation: the startup moved the majority of its staff to singapore following venture capital investment prior to the meta purchase.
  • Corporate Acquisition: meta completed the acquisition process in late december before facing immediate regulatory scrutiny in january.
  • Executive Restrictions: national authorities prevented manus cofounders residing in beijing from leaving the country during the investigative process.
  • Policy Objectives: regulatory intervention aims to suppress business departures and strictly control the export of domestic artificial intelligence assets.
  • Ongoing Dispute: affected parties anticipate further resolution processes while working to address the status of remaining domestic business operations.


Manus has developed an AI agent that can carry out tasks such as writing in-depth research reports. Raul Ariano/Bloomberg News

China has ordered that Meta Platforms’ $2.5 billion acquisition of artificial-intelligence startup Manus be unwound.

China’s National Development and Reform Commission, which has the authority to review foreign investments, said Monday that it has banned the acquisition and ordered it to be rescinded on national security grounds.

“The transaction complied fully with applicable law,” a Meta representative said in an email statement. “We anticipate an appropriate resolution to the inquiry.”

Manus has developed an AI agent that can carry out sophisticated tasks such as writing in-depth research reports and preparing presentation slides. Early versions of Manus were created by engineers at Beijing Butterfly Effect Technology, which was founded in China in 2022. 

Last year, a Singapore-based entity, also called Butterfly Effect, took over the operations of the AI agent product in markets outside China. Manus later relocated most of its China-based employees to Singapore after receiving investment from a California-based venture-capital firm.

Then, in late December, Meta acquired Manus. Days later, in January, Chinese authorities announced a review of the deal, saying that cross-border acquisitions and the export of technology must comply with the law.

Last month, Chinese officials called the two co-founders of Manus—Xiao Hong and Ji Yichao—in to discuss the deal in Beijing. The executives, who currently work for Meta, were later told not to leave the country during the investigation, The Wall Street Journal has reported. Many of Manus’s top executives are Chinese nationals.

According to Chinese law, any foreign investments that may carry a national-security risk may be subject to review by the authorities. Chinese authorities believe they have the power to demand that the deal be unwound because Beijing Butterfly Effect Technology remains a Chinese company.

After the acquisition, Meta said that there would be no continuing Chinese ownership interest in Manus and that the startup would discontinue its China-based services and operations. Manus was working to close Beijing Butterfly Effect Technology, but hasn’t yet done so.

The series of events that culminated in Meta’s purchase of China-originated technology has angered Chinese regulators. They are worried that the steps Manus took would spur other Chinese companies to follow suit and move out of China without Beijing’s approval. 

The inquiry into the Manus sale reflected Beijing’s broader attempts to protect the country’s AI know-how at a time when the U.S. and China are locked in an intensifying technology race. Both countries have tightened export controls, restricted the exchange of tech professionals and curbed cross-border investment.

Manus and Xiao didn’t immediately respond to requests for comment. Ji couldn’t be reached for comment.

Copyright ©2026 Dow Jones & Company, Inc. All Rights Reserved.

Raffaele Huang is a reporter for The Wall Street Journal in Singapore, covering Asia’s technology companies. Previously, he mainly focused on corporate news in the automotive and technology sector in Beijing.



Only antimatter provides the energy we need for interstellar travel - Big Think


LLM (google/gemini-3.1-flash-lite-preview-20260303) summary:

  • Interstellar Challenges: reaching distant star systems requires overcoming immense obstacles regarding distance time speed and propulsion efficiency.
  • Energy Limits: current chemical rockets are inefficient while nuclear fission and fusion offer only marginal improvements in energy conversion.
  • Antimatter Potential: matter antimatter annihilation represents the most efficient possible fuel source by converting total mass into energy.
  • Production Hurdles: existing laboratory methods for creating antimatter have only produced minute quantities far below the requirements for spaceflight.
  • Storage Requirements: maintaining stable antimatter containers is technically demanding and current traps are insufficient for holding large amounts.
  • Thrust Generation: redirecting high energy gamma ray photons via grazing angle mirrors is essential to convert annihilation into useful acceleration.
  • Spacecraft Strategy: practical interstellar travel necessitates continuous acceleration at constant rates followed by symmetrical deceleration to reach destinations within human lifespans.
  • Propellant Mass: matter antimatter technology eliminates the crushing weight of traditional chemical or fusion fuel by utilizing maximum possible exhaust velocity.

As humanity basks in the aftermath of the unprecedented success of Artemis II, which took humans back to the Moon for the first time in 54 years and brought them farther from Earth than ever before, many of us can’t help but think about grander goals. As a species, we don’t just dream of returning to the Moon, but of heading to places we’ve never been: other planets, other star systems, or even other galaxies. However, there are big problems we have to solve if we ever want to send humans outside of the Solar System: the problems of distance, time, speed, and fuel efficiency.

Interstellar distances are huge, even compared to the vast interplanetary distances we encounter in the Solar System. With current rocket technology, it would take hundreds of human lifetimes to reach even the nearest star, and that’s because we’re limited by speed, which is in turn limited by the efficiency of our fuel sources. Chemical-based rockets leverage quite efficient fuel sources, like liquid oxygen and liquid hydrogen, but transform less than a millionth of the fuel’s rest mass into energy. If we went to nuclear fission-powered propulsion, we could transform about a thousandth of our fuel source’s rest mass (0.1%) into energy, while nuclear fusion-powered propulsion could convert up to almost a hundredth (0.7%) of its fuel source’s rest mass into energy.
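Those conversion fractions translate into enormous differences in energy per kilogram of fuel. A quick back-of-the-envelope sketch (not from the article; the chemical value is the article's upper bound of one part in a million, and real chemical rockets do considerably worse):

```python
C = 299_792_458.0  # speed of light, m/s

# Fraction of the fuel's rest mass converted to energy, using the
# figures quoted above (the chemical value is an upper bound).
efficiencies = {
    "chemical (upper bound)": 1e-6,
    "fission":                1e-3,
    "fusion":                 7e-3,
    "annihilation":           1.0,
}

def energy_per_kg(epsilon):
    """Energy released per kilogram of fuel: E = epsilon * m * c^2 (joules)."""
    return epsilon * C**2

for name, eps in efficiencies.items():
    print(f"{name:>22}: {energy_per_kg(eps):.2e} J/kg")
```

Annihilation releases about 9 × 10¹⁶ joules per kilogram of fuel: over a hundred times more than fusion, and roughly a hundred million times more than the chemical upper bound.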

But the ultimate fuel source would be a matter-antimatter annihilation: It’s 100% efficient. Barring the discovery of some new law of physics, only antimatter provides the power we’d need for realistic interstellar travel.

The launch of Apollo 17, the ninth and final crewed mission to the Moon as part of the Apollo program, was also the first nighttime liftoff of a Saturn V rocket, occurring on December 7, 1972. The Saturn V remains the heaviest launch vehicle in history, capable of carrying the greatest masses to low-Earth orbit of any rocket ever.
Credit: NASA

Three main challenges arise in the endeavor to use antimatter as rocket fuel, all of which must be overcome if we actually want to have humans embark on an interstellar journey.

  1. The creation of antimatter. We know how to do this one in laboratory settings, and although it does require much more energy to make the antimatter than we eventually release from its annihilation, that’s not really a problem. What is a problem is that we would have to make antimatter in large amounts. If you add up all the antimatter ever made in all the labs in the history of Earth, you end up with just about a microgram’s worth of antimatter. We’d need many millions of times more to power an interstellar journey.
  2. The storage of antimatter. The very thing that makes antimatter such a fantastic fuel source — its propensity for annihilating with any normal matter that it contacts — makes it a liability for use as a fuel source. Somehow, we have to store this antimatter in a safe, stable way, and then transport it into a place where it undergoes a controlled annihilation with an equal-and-opposite amount of normal matter.
  3. The usability of energy derived from matter-antimatter annihilation. Assuming we can overcome these first two problems, we then have to turn that energy of annihilation into useful thrust: ideally by shunting the post-annihilation particles in the opposite direction we want the spacecraft to accelerate.

Let’s look at these problems a little more in depth, one at a time.

Whether two particles collide inside an accelerator or in the depths of space is irrelevant; all that matters is that we can detect the debris of what comes out, including newly created “daughter” particles. Although the flux of high-energy particles in space is lower, the achievable energies are far greater than in terrestrial laboratories. Under high-energy conditions, new particle-antiparticle pairs can be generated during these events, including both fundamental (quark, lepton) particles and composite (baryon, meson) ones.
Credit: flashmovie / Adobe Stock

We know how to create antimatter in labs; you simply smash particles together at high energies. For example, the easiest way to make an antiproton is to smash two protons together at high-enough energies so that, after the collision, there’s enough “extra” energy, via Einstein’s E=mc², to make an extra proton-antiproton pair. Protons are a dime-a-dozen, so we don’t really care about keeping them — the antiprotons are the antimatter that we’re after. Therefore, we design electric and magnetic fields to bend and confine them so that they don’t annihilate away with the first proton they encounter.
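The energy bookkeeping for that reaction (p + p → p + p + p + p̄) is easy to check. At threshold in a fixed-target collision, the invariant s = 2m²c⁴ + 2mc²·E_beam must reach (4mc²)², which works out to a beam kinetic energy of 6mc², or about 5.6 GeV. A sketch of the arithmetic (standard values, not from the article):

```python
M_P = 938.272  # proton rest energy, MeV

# Fixed-target threshold for p + p -> p + p + p + pbar:
# the invariant s = 2*M_P**2 + 2*M_P*E_beam must reach (4*M_P)**2.
s_threshold = (4 * M_P) ** 2
e_beam = (s_threshold - 2 * M_P**2) / (2 * M_P)  # total beam energy, MeV
ke_beam = e_beam - M_P                           # beam kinetic energy, MeV
print(f"threshold beam kinetic energy: {ke_beam / 1000:.2f} GeV")
```

That ~5.6 GeV threshold is modest by collider standards, which is why antiproton production is routine at accelerator facilities.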

The Large Hadron Collider (LHC) at CERN is the location of the highest-energy proton-proton collisions, but we’re better off using lower-energy collisions to create antiprotons. The reason is that an antiproton created in a high-energy collision carries a lot of kinetic energy, making it difficult to control, confine, or keep from running into something that contains a proton. In fact, the record-setting collider prior to the LHC, the Tevatron at Fermilab, used to make antiproton beams that collided with protons in its accelerator, leading to incredible fundamental discoveries in the late 20th century, including the top quark.

Just as an atom is a positively charged, massive nucleus orbited by one or more electrons, antiatoms simply flip all of the constituent matter particles for their antimatter counterparts, with positron(s) orbiting the negatively-charged antimatter nucleus. The same energetic possibilities exist for antimatter as matter. First hypothesized in 1928/9 by Dirac, antimatter (in the form of positrons) was first detected in the lab only a few years later: in 1932.
Credit: Katie Bertsche/Lawrence Berkeley Lab

We know how to store antimatter in labs as well. Electric and magnetic fields are used to bend charged particles, and if you know the mass, charge, and kinetic energy of the antimatter you’ll be creating, you can leverage those fields to store antimatter indefinitely. First, you slow the particles down, and then you create a near-perfect vacuum inside a cavity: a region devoid of normal matter except for the container walls. You then set up a quadrupole electric field and a uniform magnetic field inside the cavity to confine particles of a given mass and charge within it: a Penning trap.

However, Penning traps are really only useful if you have small numbers of particles to confine. If you have too many particles, the mutual repulsion from having so many like charges together will “push” many of those charges into the walls of the container anyway, causing you to lose the antimatter particles you worked so hard to create and confine. While it’s a tremendous achievement that we just successfully transported antimatter (on a truck!) for the first time, the fact is there were only 82 antiprotons inside the transported Penning trap. (The trap itself weighed over 1,000 kg, and yet it was the most compact large-scale Penning trap ever built.) For a kilogram of antimatter, there would be roughly 6 × 10²⁶ antiprotons to contain: far too many to confine with a setup like this.
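The scale of that gap is easy to quantify: divide a kilogram by the antiproton's mass to get the number of particles a trap would have to hold (a back-of-the-envelope sketch, using the standard proton mass):

```python
M_ANTIPROTON = 1.67262192e-27  # antiproton mass, kg (same as the proton)

def antiprotons_in(mass_kg):
    """Number of antiprotons that make up a given mass."""
    return mass_kg / M_ANTIPROTON

n = antiprotons_in(1.0)
print(f"antiprotons per kilogram: {n:.2e}")  # ~5.98e26
```

Set against the 82 antiprotons actually transported, that is a gap of roughly 25 orders of magnitude.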

On March 23, 2026, antimatter was successfully transported, via truck, without being destroyed or lost. A device known as a Penning trap, here being loaded onto a transport truck, contained nearly 100 antiprotons within it, all of which were successfully accounted for upon the truck’s completion of its journey. This also represents the smallest, lightest such Penning trap ever successfully operated.
Credit: CERN; Multimedia Production Team, MPT; Arnold, Melanie; Brice, Maximilien

We’re ultimately going to have to take a different approach for antimatter containment, but that may be possible thanks to an experiment conducted in the 2010s and 2020s: the ALPHA-g experiment. Its goal was to:

  • create large numbers of neutral antiatoms,
  • pin them so that they wouldn’t annihilate with the container walls,
  • release them so they could experience Earth’s gravitational field,
  • and measure whether antimatter fell down (as predicted), fell up (which would’ve enabled warp drive), or did something entirely different.

ALPHA-g was a success. It not only proved that antimatter gravitates the same way that normal matter does, but also demonstrated that antiatoms could be created, controlled, and stored, at least temporarily. To usefully apply this to space travel, we’d need to find a way to create large numbers of antiatoms and keep them in a compressed state, where they wouldn’t smash into container walls and could be stored long-term. 

Penning trap experiments (like BASE) and antiatom experiments (like ALPHA-g) represent significant progress toward the goal of antimatter storage, but we still have a long way to go to turn this science-fiction technology into science fact.

The ALPHA-g detector, built at Canada’s TRIUMF facility, was oriented vertically and filled with neutral antiatoms confined by electromagnetic fields. When the fields release, most antiatoms will randomly fly away, but a few that happen to be at rest will have the opportunity to move solely under the influence of gravity. If they would have fallen up, many speculations that had previously been constrained to the realm of science-fiction would have become plausible. As the experiment showed, however, antiatoms fall down in a gravitational field, killing our best hope for antigravity and warp drive technologies.
Credit: Stu Shepherd/TRIUMF

And then, once you create and store a large quantity of antimatter, you’ll need some way to bring tiny amounts of that antimatter, a little bit at a time, into an “engine” with an equal-and-opposite amount of normal matter. When the matter and antimatter collide together, they’ll annihilate each other into pure energy (in the form of photons) in the exact inverse of the way the antimatter was first created: via Einstein’s E=mc². The problem then becomes what to do with the high-energy gamma-ray photons that arise from annihilation. If you don’t build anything special, they’ll simply smash into the walls of your spacecraft, ionizing atoms and causing damage, rather than generating thrust.

The key to controlling the photons comes from a surprising source: astronomy. In astronomy, we don’t use a conventional mirror to observe high-energy gamma-rays and X-rays; they would either pass through or be absorbed by that type of matter. Instead, we build a cavity filled with mirrors set at very shallow angles (what we call “grazing angles“), which can then be used to control where those high-energy photons wind up. By arranging a series of mirrors of the right material in this fashion, we could ensure that the photons created by matter-antimatter annihilation get shunted out of the rear of the spacecraft, providing thrust in the opposite (forward) direction as part of the equal-and-opposite reaction demanded by physics.

When very high-energy photons are produced, such as X-rays or gamma-rays, they cannot be focused by a conventional mirror; they would simply ionize the electrons present in the material, be absorbed by them, or pass right through them. Instead, if you wish to focus or redirect them, you must set up an array of grazing mirrors to reflect and/or focus that light in the desired direction. This setup shows the High Resolution Mirror Assembly aboard NASA’s Chandra X-ray Observatory, and it could be applied to a matter-antimatter annihilation chamber aboard a spaceship.
Credit: NASA/CXC/D.Berry

This, at last, converts your matter-antimatter annihilation into thrust: again, with up to 100% efficiency. Efficiency, in this case, means that 100% of your fuel (where 50% is matter and 50% is antimatter) gets converted into useful energy, which can be used to generate momentum-changing thrust and accelerate your spacecraft. This leads to a grand plan for reaching another star system:

  • use a big, heavy-lift conventional rocket to place a capsule with your crew, plus the matter, antimatter, and annihilation engine, into Earth’s orbit (make multiple trips if you need to),
  • then begin steadily accelerating toward your destination until you’re moving at your desired speed,
  • then coast,
  • then, at the halfway point, turn the ship to the opposite orientation,
  • then coast again until it’s time to begin decelerating,
  • and then steadily decelerate at the same rate you accelerated at earlier until you arrive at your destination.

If you can accelerate at 1 g (the gravitational acceleration experienced on Earth, 9.8 m/s²) for the first half of the trip, and then turn the ship around and decelerate at the same rate for the second half of the trip, it’s straightforward to calculate the amount of time it would take to reach a nearby star. Instead of tens or hundreds of thousands of years, the trip could be done in mere decades: short enough for a human crew to still be alive when they arrive at their destination. (Albeit, with no return trip possible.)
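That calculation comes from the relativistic rocket equations. Here is a sketch for the no-coast version of the trip, accelerating at 1 g to the midpoint and decelerating at 1 g the rest of the way, with fuel mass ignored entirely (constants and the Proxima distance are standard values, not the article's):

```python
import math

C = 299_792_458.0   # speed of light, m/s
G = 9.8             # 1 g, m/s^2
LY = 9.4607e15      # metres per light-year
YEAR = 3.156e7      # seconds per year

def one_g_trip(distance_ly):
    """Earth-frame and ship-frame travel times (in years) for a trip that
    accelerates at 1 g to the midpoint, then decelerates at 1 g."""
    x = distance_ly * LY / 2        # metres covered in each half of the trip
    k = G * x / C**2                # dimensionless a*x/c^2
    t_half = (C / G) * math.sqrt((1 + k) ** 2 - 1)  # coordinate time per half
    tau_half = (C / G) * math.acosh(1 + k)          # proper (ship) time per half
    return 2 * t_half / YEAR, 2 * tau_half / YEAR

earth_t, ship_t = one_g_trip(4.24)  # Proxima Centauri, ~4.24 light-years away
print(f"Earth frame: {earth_t:.1f} yr; ship frame: {ship_t:.1f} yr")
```

For Proxima Centauri this works out to roughly six years in Earth's frame and about three and a half aboard ship, thanks to time dilation; the article's coasting plan takes longer but demands vastly less fuel.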

If you were to get in a spaceship and accelerate at 1g (Earth’s acceleration) for the entirety of the trip, you could travel at nearly the speed of light after only a few years of acceleration. As you increased your speed ever closer to the speed of light, the effects of time dilation would get progressively more severe. In theory, distances of many light-years could be traversed in experienced times of much less than a year, but one must reckon with the weight of realistic fuel sources as well.
Credit: P. Fraundorf/Wikimedia Commons

However, there’s an enormous problem even if you did everything we mentioned up to this point:

  • generate all the needed antimatter,
  • develop antimatter storage,
  • develop and build a suitable matter-antimatter annihilation chamber,
  • and use a conventional launch vehicle to lift the payload and the fuel into Earth’s orbit.

The problem is this: the sheer amount of fuel that you’d need to make an interstellar journey happen. Let’s imagine we have a small payload of only 500 kg (1,102 pounds) including the astronauts and all of their food, water, and supplies. If you want to accelerate it at 1 g, you’d only need a small amount of matter-and-antimatter to annihilate: 16 milligrams worth, or 8 milligrams of matter with 8 milligrams of antimatter.

However, that will only get you one second’s worth of acceleration! If you want to accelerate for longer, you’ll need more fuel. With one gram of matter and one gram of antimatter, you can get two full minutes of acceleration. With 300 grams of matter and 300 grams of antimatter, you can get 10 hours of acceleration, which would get you up to speeds of 368 km/s — more than 820,000 miles per hour. That’s twice as fast as the maximum speed of the Parker Solar Probe, which, to date, is humanity’s fastest spacecraft ever flown.
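Those figures follow from the photon-rocket relation: thrust is F = P/c, so holding a 500 kg ship at 1 g requires power P = mgc, and the annihilated mass per second is P/c² = mg/c. A sketch that ignores the (slowly) shrinking ship mass:

```python
C = 299_792_458.0  # speed of light, m/s
G = 9.8            # 1 g, m/s^2

payload = 500.0    # kg, the article's example payload

# Photon-rocket thrust is F = P / c, so 1 g of acceleration needs
# power P = m*g*c, i.e. an annihilated fuel mass of m*g/c per second.
burn_rate = payload * G / C              # kg (matter + antimatter) per second
print(f"burn rate: {burn_rate * 1e6:.1f} mg/s")   # ~16.3 mg/s

dv = G * 10 * 3600                       # speed after 10 hours at constant 1 g
fuel_10h = burn_rate * 10 * 3600         # fuel spent over those 10 hours
print(f"after 10 h: {dv / 1000:.0f} km/s on {fuel_10h * 1000:.0f} g of fuel")
```

At constant mass this gives about 353 km/s on roughly 590 g of fuel; the article's slightly higher 368 km/s figure presumably credits the ship for getting lighter as the fuel burns.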

This illustration shows the Parker Solar Probe approaching perihelion: its closest approach to the Sun. It achieved its closest approach ever on December 24, 2024, coming within just 4.43 solar diameters of the Sun’s photosphere. It became the fastest spacecraft ever created by humanity during that, and subsequent, perihelion passes.
Credit: NASA’s Goddard Space Flight Center/Scientific Visualization Studio

But if you want to reach even the nearest stars in a reasonable amount of time, you need to travel much faster. Remember, stellar distances are measured in light-years, with the closest star system still over four light-years away. If you want to get there in a couple of decades, you need to accelerate until you reach more than 20% the speed of light, which is more difficult than you might initially think.

Sure, you can do the math and calculate that, in order to reach 20% the speed of light (roughly 60,000 km/s), you’d need about 50 kg worth of antimatter and 50 kg worth of matter as fuel. It would take you approximately 10 weeks of constant acceleration to reach that speed. Then, you’d journey for approximately 20–25 years (the amount of time it would take you to traverse around 4 light-years at that speed), with the astronauts keeping themselves alive with that tiny payload inside. You’d then need to spend another 50 kg worth of antimatter and another 50 kg worth of matter in matter-antimatter annihilation to decelerate, finally coming to rest in the Proxima/Alpha Centauri system if you did your calculations correctly.
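That fuel estimate can be sanity-checked with the relativistic rocket equation for an ideal photon rocket (exhaust velocity c), where the initial-to-final mass ratio needed to reach speed β = v/c is √((1+β)/(1−β)). A sketch, not the article's own calculation:

```python
import math

def photon_rocket_mass_ratio(beta):
    """Initial/final mass ratio to reach speed beta (= v/c) with an
    ideal photon rocket, i.e. exhaust velocity equal to c."""
    return math.sqrt((1 + beta) / (1 - beta))

payload = 500.0                        # kg, the article's example payload
ratio = photon_rocket_mass_ratio(0.2)  # reaching 20% the speed of light
fuel = payload * (ratio - 1)           # matter + antimatter for a single burn
print(f"mass ratio: {ratio:.3f}; fuel for one burn: {fuel:.0f} kg")
```

A single burn to 0.2c costs about 112 kg of fuel, in the same ballpark as the article's round figure of ~100 kg; a full accelerate-then-decelerate mission costs somewhat more, since the deceleration fuel must itself be accelerated at the start.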

Only with maximum (~100%) fuel efficiency, which you can only achieve with matter-antimatter annihilation, do you avoid the catastrophic problem associated with all other propulsion strategies: the need to bring enormous amounts of fuel on board the spacecraft.

Whenever you collide a particle with its antiparticle, it can annihilate away into pure energy. This means if you collide any two particles at all with enough energy, you can create a matter-antimatter pair. But if the Universe is below a certain energy threshold, you can only annihilate, not create. The pathway to using antimatter as fuel in space involves producing it in copious quantities here on Earth, storing it, and then annihilating it with matter in a controlled fashion inside a spaceship’s reaction engine.
Credit: Andrew Deniszczyc/revise.im

This is the best that the conservation of energy will permit for a spacecraft, at least, given the limits of the laws of physics as we currently understand them. The next-best option to matter-antimatter annihilation, which is 100% efficient at converting matter into energy, is nuclear fusion as it works in the Sun — and that only converts 0.7% of its initial rest-mass into usable energy. Instead of needing 100 kg of antimatter to accelerate a 500 kg payload, you’d need at least thousands of tons (millions of kilograms) of hydrogen, plus a constantly-working fusion reactor, and you’d have to extract and jettison the spent fuel (helium) along the way. Remember: You don’t just need to accelerate/decelerate the mass that’s going to be arriving at your ultimate destination. You need to accelerate/decelerate your unspent fuel at every stage along your journey.

For all methods besides matter-antimatter annihilation, you’re inevitably wasting more than 99% of your mass, meaning that all of that “fuel” just sits there as useless heavy mass that you need to transport during your journey. Matter-antimatter annihilation is the only option for interstellar travel where most of what you’re bringing with you is payload, rather than propellant, and the effective exhaust velocity (because it’s purely in the form of photons) is the maximum possible: the speed of light.

The advances required will be substantial, but of all the fuel sources that we know of, only antimatter, ultimately, is capable of powering our dreams of becoming an interstellar civilization.


How Scientists Changed Their View of Insomnia | RealClearScience

  • historical context: insomnia has affected humanity since ancient times although recent scientific insights provided significant advancements in understanding chronic sleep deprivation.
  • prevalence rates: nearly one third of the adult population in england currently reports frequent symptoms which qualifies insomnia as a widespread psychological concern.
  • independent condition: researchers shifted away from the secondary insomnia model and now recognise the condition as an independent disorder requiring its own targeted treatment.
  • health relationships: addressing sleep patterns can frequently lead to measurable improvements in coexisting physical and mental health issues like chronic pain depression and anxiety.
  • vulnerable groups: demographic factors such as being female older age and lower socio economic status correlate with higher risks of experiencing sleep disruption.
  • behavioral management: experts recommend avoiding prolonged time in bed while awake to prevent the brain from creating negative associations with sleep environments.
  • treatment efficacy: cognitive behavioural therapy for insomnia remains a primary recommendation for successful symptom management despite limited availability in traditional clinical settings.
  • medication risks: reliance on traditional sleeping tablets is discouraged due to long term side effects while newer pharmaceutical options still lack comprehensive long term safety data.

Insomnia may have been torturing humanity since ancient times, but over the last 20 years scientists have made progress in their understanding of chronic sleep deprivation.

Today, sleep deprivation is one of the most widespread reported psychological problems in Britain, with about a third of the adult population in England reporting frequent insomnia symptoms.

Insomnia rarely occurs on its own, which brings us to one of the biggest changes scientists have made in our understanding of chronic sleep deprivation. The vast majority of people with insomnia also have other mental and physical health conditions, like diabetes, hypertension, chronic pain, thyroid disease, gastrointestinal problems, anxiety or depression.

In its diagnostic history, insomnia coupled with another illness or disorder was called secondary insomnia. That meant that insomnia was considered a consequence of those other underlying conditions. As such, until fairly recently clinicians did not generally attempt to treat secondary insomnia.

But in the early 2000s, both research and clinical practice evidence started to indicate that this approach was wrong. Scientists argued that insomnia could precede or long survive a primary condition. Abandoning this distinction between primary and secondary insomnia was a major advance in acknowledging that insomnia frequently was an independent disorder, requiring its own treatment.

What’s more, researchers have been accumulating strong evidence that helping people with their sleeping problems could actually lead to improvements in their other health conditions. Chronic pain, chronic heart failure, depression, psychosis, alcohol dependency, bipolar disorder and PTSD can all improve for patients who address their sleeping problems.

Who gets insomnia?

Over the past two decades, we have acquired more rigorous and international data illustrating how ubiquitous insomnia is. Insomnia affects almost everyone, though women, older people, and people of lower socio-economic status are more vulnerable to it.

These groups experience a combination of biological, psychological and social risk factors that expose them to long-term sleep-disruption. For example, women often experience acute hormone fluctuations, pregnancy and birth, breastfeeding, menopause, domestic violence, caregiving roles, higher prevalence of depression and anxiety – all of which can lead to more opportunities for prolonged sleep disruption.

Some current issues in insomnia research include the need to understand different types of insomnia symptoms, and their relationship to health and performance risks. For example, there is evidence that difficulty initiating sleep (as opposed to difficulty staying asleep, or waking up too early in the morning) is associated with an increased risk of depression. Similarly, scientists still have questions about the changes in things like brain activity, heart rate and stress hormones that accompany insomnia. In common with all other mental health disorders, we have yet to find biomarkers of insomnia.

However, research has helped us understand some things people can do to prevent insomnia episodes from progressing to chronic insomnia, which is harder to treat. When insomnia symptoms happen more nights than not, and last for more than three months, a diagnosis of insomnia disorder, or chronic insomnia, can be made.

Insomnia keeping you up? Lizavetta/Shutterstock

One of the most common and harmful habits that develop during periods of insomnia is lying in bed, trying to sleep. Scientists have learned that lying in bed awake perpetuates cognitive arousal and, in time, teaches your brain to stop associating bed with sleep.

Thus, if you cannot sleep at night, get up and do something absorbing but calming – read, write a list for the following day, listen to calming music or do some breathing exercises. When you feel sleepy again, go back to bed. If you are tired the following day, a well-placed short nap in the afternoon – 20 minutes at most – is fine. Be careful with daytime sleeping, however: it can reduce sleepiness at night and make falling asleep even harder.

For those who do struggle with insomnia, effective treatments are available. The profound shift in thinking from "secondary insomnia" to insomnia disorder in its own right shows the power of clinical diagnosis in providing a pathway to treatment.

Cognitive behavioural treatment for insomnia (CBTI) is a package of techniques designed to maximise sleepiness at bedtime. It involves structured steps which aim to modify behaviour and mental activity. There are some predictors of treatment success: shorter duration of insomnia symptoms (years, rather than decades), less depression or pain and more positive expectations towards CBTI. But CBTI is broadly effective across all groups of people with insomnia.

Even so, only a tiny proportion of people reporting insomnia symptoms seek medical help. People may consider insomnia symptoms trivial or manageable, or they may be unaware of the options. Treatment may also simply be out of reach: CBTI remains largely unavailable in clinical practice, mainly because of clinicians’ unfamiliarity with the treatment programme and limited funding.

This pushes patients towards sleeping tablets, which are not an acceptable long-term solution. Sleeping tablets are associated with significant cognitive and motor impairment, increased risk of falls, dependence, tolerance and withdrawal symptoms, daytime lethargy, dizziness and headaches.

The main truly “new” class of sleeping pills are the dual orexin receptor antagonists (DORAs), which have shown a safety profile in many ways better than that of the traditional sedatives, especially around dependence. But DORAs are not risk-free or “mild” pills. They are relatively new to the market, first approved in the UK in 2022, so we lack the long-term data needed to assess their safety for extended use in people with insomnia.

A decent alternative is online self-delivered CBTI, on platforms such as Sleepful, which are free to access.

We have made great strides in sleep medicine for people with insomnia over the past 20 years; we just need to realise the potential of those advances by providing the right help for those suffering with it.

Iuliana Hartescu, Senior Lecturer in Psychology, Loughborough University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


What I Learned From Teaching Darwin | RealClearScience

1 Share
  • Course structure: a graduate seminar in twenty twenty five examined darwinian thought through foundational texts and critical discussion
  • Darwinian methodology: scientific inquiry in the nineteenth century relied on descriptive prose and observation rather than modern statistical formalisms
  • Academic climate: higher education benefits from confronting difficult and offensive historical materials with rigor rather than avoidance
  • Human complexity: primary source study reveals a tension between scientific brilliance and the presence of harmful social hierarchies regarding race and gender
  • Science education: training future scientists requires developing judgment to navigate historical prejudice while maintaining intellectual integrity
  • Current trends: the modern scientific landscape emphasizes career metrics and narrow specialization over the pursuit of broad knowledge
  • Interdisciplinary vision: the pursuit of truth requires crossing disciplinary boundaries to address massive societal challenges like artificial intelligence and climate change
  • Professional incentives: current academic structures prioritize publication volume and reputation over the fundamental relationship between humans and the natural world

During the fall semester of 2025, I taught a graduate seminar entitled “Darwinian Thought and Society.” While teaching should always derive from generosity and a desire to share knowledge, my motivations were partly selfish. The course was an opportunity for me to re-engage with Darwin’s foundational ideas in the company of some of the brightest junior scientists that I’ve ever come across.

The conversations around the first book we read, Darwin’s 1859 “On the Origin of Species,” were mostly familiar ones. We often discussed his use of evidence and his voluminous knowledge of natural history. But other features stood out. For example, the manner in which the book delivered its theoretical argument is unlike what scientific opuses do today. The book contains no equations and only a single figure, a diagram often described as the world’s first phylogenetic tree. It contains no detailed experimental design. There are no statistical methods, or power calculations. Yet the ideas in it are among the most radical and dangerous in the Western canon. In 2026, science has been forced into a deep reflection phase with regard to its present and its future. Revisiting Darwin offers useful lessons for the terrain we now occupy.

One of the revelations from the course was something that I already knew but had not fully absorbed: that the most compelling aspect of Darwin’s work is the manner in which he arranged, staged, and presented the evidence. Darwin did not persuade with numbers, strict axioms, or formalisms. He built worlds for the reader to inhabit, such that our doubts collapse under the steady pressure of his observations, stitched together through turns of phrase. In this sense, “On the Origin of Species” — arguably the most important scientific treatise since Isaac Newton’s “The Mathematical Principles of Natural Philosophy” — can be responsibly described as a work of prose and science communication. And his broader corpus is a gift because it encourages conversation rather than fruitless provocation and allows us to discuss, with clarity, even his most controversial ideas.


On the first day of class, I joked with students that I would play the role of their politically conservative uncle. That is, there would be no trigger warnings and none of the cushioning that has become standard in college courses that include exposure to ideas and readings with offensive language or content. I told them that we would read Darwin’s books as they were written and try to understand them, and if they didn’t like that, to enroll in a different course. The larger lesson was simple: To study a complex world, you must read difficult material and learn to interpret it with rigor and empathy.

I was priming the class for Darwin’s views on race and gender, ideas that complicate many of our largely positive opinions of him (mine included). Some of my selective memory, which demotes his problematic takes, has support: There is a literature on how progressive he was compared to scientists like his cousin Francis Galton, who coined the term “eugenics” in 1883. But reading Darwin’s 1871 book “The Descent of Man” in a classroom with several young women from around the world softened my rigid stance that the right response to backward takes is to simply get over them. I still believe that refusing to read or interpret such work is unscholarly. But I also came to admit something I had been too eager to brush aside: Even when we consider historical context, there is still something painful about reading a giant of science describe human differences in the language of hierarchy, rank, and levels of civilization.

More generally, the experience taught me that science education is not merely the transfer of correct ideas from one generation to the next. It is also an apprenticeship in judgment. It is where students learn how to handle brilliance contaminated by prejudice, how to read carefully without worship, and how to separate historical importance from moral innocence. We do not protect science by pretending its heroes were unassailable. We protect it by showing that truth-seeking is a long walk on rugged terrain.

In casual conversations, groups of friends and I have conducted the thought experiment of what Darwin would be like if he were around today. Usually, our results highlight some of his peculiarities: He was a naturalist but only received formal education in medicine (studies that he did not complete) and theology. He was wealthy and well connected, but was not an exceptional student, and famously lacked the mathematical proclivities of some of his contemporaries. But there is a more provocative dimension to the thought-exercise. I contend that 2026’s Darwin would carry the same savant-like obsession with nature but would apply it to the most complex social questions of today. His interests would surely transcend natural history and theories of evolution. He would care about the misinformation crisis, climate science, and have opinions about how to live in a world being upended by artificial intelligence and threats to democracy. He did not care much for disciplinary boundaries in his own time, and he would not care for them now. His computer desktop would have dozens of folders, some with machine-learning papers, others full of ornithology monographs. And he’d read them all.

What I’m offering may sound obvious to some, but the multiplicity that I discovered in Darwin is far from what I see encouraged in the scientists of today. In my view, scientific progress has become a race for prizes and the gamification of metrics as much as any other ambition. We build reputations in idea spaces often dominated by factions. We are penalized for pivoting into new areas of inquiry. We flood the market with ideas, hoping one of them catches fire. Surely, a love for the natural world lives in there somewhere, but the incentive structure has turned many of us into human large language models, where we sometimes do not care about what we are measuring or why, as long as it lands in the pages of a prestigious journal. In this respect at least, I can’t help but lament how far we have fallen from the days of Darwin, even if science has improved in many aspects (for example, it is far more diverse and inclusive than in his day). For Darwin at least, science was a conversation between humans and nature more than it was an industry. I wish that were true today, for all of us.

Because I offer critiques of the science status quo, I am often characterized as something of a cowboy (charitably) or a gadfly (a label I resent). But what I aspire to be, more than anything, is an intellectual child of Charles Darwin. By this I do not mean a disciple of every belief he held, nor a romantic devotee of his era. I mean someone who believes that science is bigger than its rituals, that it should be hospitable to unusual minds, and that difficult truths must be faced directly.

This article was originally published on Undark. Read the original article.


The Case Against Social Media "Addiction" | Cato at Liberty Blog

2 Shares
  • Legal Trends: courts and lawmakers increasingly categorize heavy social media use as addiction despite lacking scientific consensus.
  • Legislative Actions: various bills and regional bans seek to restrict minor access and mandate design changes on digital platforms.
  • Definitional Confusion: public policy frequently conflates distinct concepts like physiological dependence and behavioral habits with clinical addiction.
  • Diagnostic Limits: addiction requires compulsive engagement despite harmful consequences whereas many online behaviors remain simple routines or habits.
  • Scientific Flaws: existing research relies on fragmented data and correlational studies that fail to establish direct causation between social media and mental health.
  • Financial Incentives: recognition of a new clinical disorder expands insurance billing opportunities and fuels a growing rehabilitation industry.
  • Institutional Risks: history shows that subjective diagnostic criteria lead to overdiagnosis and inflated medical spending for ordinary human behaviors.
  • Policy Consequences: premature medicalization threatens individual privacy and free speech while inviting excessive government oversight of digital platforms.


A movement is underway to classify heavy social media use as a form of addiction. Advocacy groups, plaintiffs’ attorneys, and a growing number of lawmakers are treating the proposition as settled science. In a landmark California trial in early 2026, a jury found Meta and Google negligent for designing platforms that allegedly caused mental health harm, awarding $6 million in damages. 

In Congress, the Kids Online Safety Act, a bill that would require social media platforms to prevent specified harms to minors, including “compulsive usage,” and to disable addictive product features by default, has advanced out of committee in both chambers. Additional bills would ban children under age 16 from using social media entirely, require age verification at the app store level, and expand the Children’s Online Privacy Protection Act to cover minors up to age 17. Australia has enacted an outright ban on social media for children under 16 years old.

The pace of legal and legislative action suggests a society that has made up its mind. But the underlying science has not. The research base on social media addiction remains fragmented, methodologically inconsistent, and far from the kind of consensus that would ordinarily justify the regulatory and legal apparatus now being built around it. As we have argued in our earlier analysis of how the American health care system rewards psychiatric overdiagnosis, when diagnosis is subjective and payment depends on diagnosis, the system will predictably expand the boundaries of illness. Social media addiction is poised to become the next case study in that dynamic, with consequences that extend well beyond health care spending into the domains of free speech, privacy, and innovation.

Addiction, Dependence, and Habit Are Not the Same Thing

Before asking whether social media is addictive, it is worth clarifying what addiction actually means, because politicians, journalists, and even some clinicians routinely misuse the term. As one of us has previously argued, the conflation of addiction with dependence and habit distorts both public understanding and public policy.

These three concepts describe very different things. Dependence is a physiological adaptation in which abruptly stopping a substance produces withdrawal symptoms. It is common and, by itself, unremarkable. Anyone who has ever quit coffee cold turkey and spent the next two days with a splitting headache has experienced caffeine dependence. Patients who take certain antidepressants, benzodiazepines (e.g., Valium), antiepileptic drugs, or beta blockers for extended periods develop physical dependence as well. If they stop abruptly, they will experience withdrawal. In some cases, withdrawal can be fatal. Yet no one would claim that patients taking beta blockers long term for high blood pressure or antiepileptic medications for a seizure disorder are addicted to those drugs.

A habit is something different still. Habits are behavioral patterns, often automatic, that people repeat because they find them pleasurable, comforting, or simply routine. Checking social media first thing in the morning, scrolling through a feed while waiting in line, or reaching for the phone out of boredom: These are habits. They may be unwise. They may waste time. They may even be difficult to break. But difficulty is not pathology. People also find it hard to stop snacking, binge-watching television, or hitting the snooze button. We do not diagnose these behaviors as diseases.

Addiction is distinct from both. The American Society of Addiction Medicine’s definition states: “Addiction is a treatable, chronic medical disease involving complex interactions among brain circuits, genetics, the environment, and an individual’s life experiences. People with addiction use substances or engage in behaviors that become compulsive and often continue despite harmful consequences.” Addiction can involve substances or activities. For example, the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM) classifies gambling disorder as an addiction. 

The defining feature is compulsive behavior: repeated engagement despite clear harm to relationships, finances, or health. This pattern reflects changes in the brain’s reward and decisionmaking circuits that erode self-control. But people with addiction do not completely surrender agency; they are not zombie-like automatons at the mercy of a substance or activity. Even in the grip of addiction, many will avoid use in certain settings, delay it when consequences are immediate, or respond to incentives—evidence that agency, though impaired, is still intact.

When the term “addiction” is applied loosely to heavy social media use, it skips over the crucial middle category of habit and confers a clinical gravity that the evidence does not support. Someone who checks Instagram too often or finds it hard to put down TikTok almost certainly has an unhealthy habit. Calling it an addiction equates that behavior with the compulsive, life-destroying patterns seen in substance use disorders.

This distinction matters because “addiction” is not a neutral word—it has specific consequences enshrined in policy. Once applied, it unlocks diagnosis codes, insurance payments, treatment industries, lawsuits, and regulation. We don’t create Medicaid billing categories for habits. We don’t pass federal laws to shield children from habits. Label something “addiction,” and the entire policy engine comes to life. That label should follow the science, not lead it.

What Does the Research Actually Show?

Much of the existing research on social media and mental health suffers from serious methodological limitations. The majority of studies rely on self-reported measures administered at a single point in time. They can identify correlations between heavy social media use and negative mental health outcomes, but they cannot establish the direction of causation. It is equally plausible that individuals already experiencing depression, anxiety, or social isolation turn to social media as a coping mechanism rather than social media causing those conditions.

A 2024 systematic review and meta-analysis published in JAMA Pediatrics analyzed 143 studies examining the effects of social media use on mental health among over one million adolescents worldwide. Overall associations were small, inconsistent across studies, and often confounded by other factors such as personality and social support. The evidence base being cited to justify sweeping policy interventions is built on shaky scientific ground, as one of us recently argued in the Washington Post.

None of that means excessive social media use cannot be harmful for certain individuals. Some people undoubtedly experience significant distress and functional impairment related to their online habits. But the question of whether a behavior causes harm in some people is different from the question of whether it constitutes a discrete clinical disorder, and the latter question is far from resolved.

Why We Should Not Trust the Diagnostic Authorities to Get This Right

Proponents of recognizing social media addiction as a disorder often point to the DSM and the International Classification of Diseases (ICD) as the bodies that will eventually resolve the question. But formal inclusion in these manuals would not settle the science. It would settle the payment. And the track record of these manuals should inspire caution, not confidence.

Formal classification matters because of what it triggers financially. In the American health care system, a diagnosis unlocks reimbursement. A recognized social media addiction diagnosis would trigger insurance coverage under the Mental Health Parity and Addiction Equity Act, which requires health plans, including Medicaid managed care, to cover behavioral health services at parity with medical and surgical services. Under fee-for-service billing, providers increase revenue by increasing the volume of services delivered, and subjective diagnostic criteria provide the discretion to do so. The system fixes the price of a service but introduces no effective mechanism for governing whether the service was clinically necessary. 

The DSM has progressively broadened the boundaries of psychiatric illness over successive revisions, often without corresponding improvements in diagnostic precision. Its fifth edition collapsed previously distinct autism categories into a single spectrum elastic enough to encompass both nonverbal children requiring constant care and socially awkward adolescents who prefer solitude. It loosened ADHD criteria, allowing symptom onset as late as age 12 rather than requiring it by age 7, and reduced the symptom threshold for adults. Generalized anxiety disorder requires only that worry be “excessive” and cause “clinically significant distress or impairment,” judgments that depend entirely on a clinician’s interpretation of where normal worry ends and disorder begins. Each revision has expanded the population eligible for diagnosis and, with it, the population eligible for treatment and reimbursement.

As we documented in our recent analysis of Medicaid-funded autism therapy, the broadening of autism spectrum criteria, combined with Medicaid’s open-ended reimbursement structure, produced an explosion in spending on applied behavior analysis therapy that far outpaced any plausible change in the actual prevalence of disabling autism. The broadening of ADHD criteria produced a parallel surge in stimulant prescriptions. In each case, the combination of subjective diagnosis and financial incentives that reward diagnosis pushed the boundaries of illness outward.

Social media addiction, if formalized, will follow the same trajectory. A social media addiction rehabilitation industry is already emerging: Specialized retreats, counseling programs, and screen-time management apps are marketing themselves to anxious parents and burned-out professionals. That industry will expand dramatically the moment a formal diagnosis is established—a new special interest group, funded in significant part by taxpayer dollars and private insurance premiums. 

We have already seen this dynamic play out when unhealthy habits become medically pathologized. The inclusion of “gaming disorder” in the ICD-11 in 2019 is a cautionary example rather than a reassuring precedent. A large group of scholars published an open letter opposing the classification, warning that it rested on a low-quality evidence base, that the diagnostic criteria leaned too heavily on substance use and gambling frameworks without adequate validation for behavioral contexts, and that official classification would generate a “tsunami of false positive referrals to treatment.” The scholars were particularly concerned that premature classification would pathologize ordinary recreational activity and cause significant stigma. That gaming disorder made it into the ICD-11 despite these objections is not evidence that the process works. It is evidence that diagnostic classification is driven as much by political and institutional momentum as by scientific rigor. Anyone who believes that social media addiction will receive more careful treatment from these same institutions is not paying attention.

What Comes Next

This pattern is by now familiar. Subjective diagnostic criteria and financial incentives that reward diagnosis have repeatedly produced policy responses that were disproportionate, costly, and harmful. The social media addiction debate is following the same trajectory. If we do not insist on scientific rigor before enshrining a diagnosis in law and policy, we will once again find ourselves managing the consequences of a premature consensus.

The question is not whether social media can be used in unhealthy ways. Of course it can. The question is what happens when we reclassify those behaviors as a medical disorder before the science supports that classification. When diagnosis is subjective and incentives reward diagnosis, the boundaries of illness expand. More diagnoses generate more treatment, more spending, and more regulation—along with greater government intrusion into choices that were once considered matters of personal judgment.

That reclassification also invites litigation. Once courts accept the premise of “addiction,” lawsuits will pressure platforms to alter or restrict lawful content and design features, often through settlement agreements negotiated outside the legislative process. Lawmakers, responding to a perceived epidemic, will layer on additional restrictions—limiting access, mandating intrusive age verification, and expanding regulatory oversight of online speech. As our colleagues at Cato have argued, these interventions threaten free expression and innovation while doing little to address the underlying concerns about children’s welfare and risk undermining online speech and privacy for users of all ages. The costs will be measured not only in dollars but also in diminished speech, eroded privacy, and the further medicalization of ordinary human behavior.

When the science is unsettled and the incentives to expand diagnosis are strong, we must be cautious. At bottom, this is a question of restraint.

We should be especially careful before turning a widespread human behavior into a medical disorder, because once we do, the consequences will extend far beyond the people we are trying to help.


Chernobyl Anniversary: Blame Communism, Not Nuclear Power

1 Share
  • Event Origins: a nuclear reactor explosion occurred at the chernobyl facility on april 26 1986 during a failed electrical safety test.
  • Systemic Flaws: the rbmk 1000 reactor design lacked standard containment structures and featured a positive void coefficient that caused uncontrollable power spikes.
  • Governmental Secrecy: state officials suppressed information about the disaster preventing public warnings and delaying necessary evacuations for residents.
  • Fatal Consequences: two workers died in the initial event while twenty eight emergency responders died from acute radiation poisoning shortly thereafter.
  • Bureaucratic Failures: central planning prioritised meeting political party goals over safety leading to widespread construction defects and shoddy operational practices.
  • Intelligence Oversight: secret police reports documented numerous technical warnings and construction violations for years yet failed to implement corrective safety measures.
  • Health Impacts: rigorous studies indicate that while thousands of thyroid cancer cases were linked to the disaster overall mortality rates remain far lower than alarmist projections.
  • Global Repercussions: misguided anti nuclear policies prompted by the incident stalled clean energy adoption and increased dependence on fossil fuels over subsequent decades.


The world's worst nuclear disaster began 40 years ago at 1:23 a.m. on April 26, 1986, when Unit 4 at the Chernobyl nuclear power generation facility experienced an explosion and meltdown. Ironically, the explosion was caused by a botched safety test.

The point of the test had been to see what would happen if the power plant lost its main electrical supply: Could spinning turbines generate enough power to run the coolant pumps until emergency backup diesel generators could kick in? The experiment had failed three times previously, but never as catastrophically as it did that night.

Before the meltdown, Soviet officials had bragged regularly about the safety of their nuclear power plants and disparaged those in the West. In 1983, state-sponsored news agency Novosti reported that Soviet scientists had estimated the probability of a nuclear accident involving a radioactive discharge at one in 1 million. In 1984, Minister of Power and Electrification Petr Neporozhny called the country's nuclear plants "totally safe." Just two months before the disaster, the English-language propaganda magazine Soviet Life claimed: "Even if the incredible should happen, the automatic control and safety systems would shut down the reactor in a matter of seconds. The plant has emergency core cooling systems and many other technological safety designs and systems."

Soviet officials initially tried to hide the disaster, but it was detected in the West two days later when an employee's contaminated shoes triggered radiation alarms at Sweden's Forsmark Nuclear Power Plant. The Swedes at first feared that their own plant was leaking radiation, but they soon traced the issue back to Chernobyl by analyzing wind patterns and specific radioactive isotopes.

Chernobyl's radioactive plume spread over Belorussia, Ukraine, western Russia, and much of Europe. Two workers died from the initial explosion, and the 28 firefighters and emergency workers who doused most of the reactor's flames in the following three and a half hours died over the next three months from acute radiation poisoning. Their bodies were so radioactive that they were buried in lead coffins encased in concrete.

Anatomy of a Meltdown

The Chernobyl explosion is "the only accident in the history of commercial nuclear power to cause fatalities from radiation," as the Nuclear Energy Institute points out. "It was the product of a severely flawed Soviet-era reactor design, combined with human error."

Chernobyl's RBMK-1000 reactors—the initials stand for reaktor bolshoy moshchnosty kanalny, meaning high-power channel reactor—used graphite as the main moderator to slow down fast neutrons, with ordinary water as the coolant. Slowing neutrons enables them to collide with fissile material (uranium) to sustain a nuclear chain reaction, and that reaction produces heat that boils the water that turns the turbines that generate electricity. The design had a critical flaw known as a "positive void coefficient": the coolant water also absorbs neutrons, so when it boils into steam voids, that absorption is lost and reactivity rises, which can lead to uncontrollable power spikes.

Before the test was to begin, the reactor was supposed to be stabilized at the raw heat output of 700–1,000 megawatts thermal. But shortly after midnight, the power fell to just 30 megawatts thermal. Operator efforts to boost power back to the level originally planned for the test were stymied by increases in neutron-absorbing xenon and steam condensing into coolant water. To compensate, the operators withdrew neutron-absorbing boron carbide control rods to increase reactivity, raising reactor output to 200 megawatts thermal.

At 1:23 a.m., operators cut the regular steam supply. The coolant water supplied by pumps powered by the slowing turbines began to boil into steam. This led to a rapid feedback loop where rising reactivity produced more steam and burned off xenon, causing ever greater reactivity. The result was an overwhelming power surge, estimated to be 100 times the nominal power output.
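The runaway character of that feedback loop can be illustrated with a toy numerical sketch. To be clear, this is not a physical reactor model: the constants, the linear power-to-void relation, and the step update below are all invented for illustration. It shows only the qualitative difference between a void coefficient that damps a power excursion and one that amplifies it.

```python
# Toy illustration of void-coefficient feedback. NOT a reactor model:
# all constants and relations here are arbitrary and chosen only to
# make the qualitative behaviour visible.

def simulate(void_coefficient, steps=50):
    """Step a relative power level forward under void-fraction feedback."""
    power = 1.0            # 1.0 = nominal power
    history = [power]
    for _ in range(steps):
        void = min(power / 2.0, 1.0)          # hotter core -> more steam voids
        reactivity = void_coefficient * void  # voids add or remove reactivity
        power *= 1.0 + reactivity             # reactivity feeds back into power
        history.append(power)
    return history

# Negative coefficient: boiling removes reactivity, so an excursion
# damps itself out.
negative = simulate(void_coefficient=-0.05)

# Positive coefficient (the RBMK flaw): boiling adds reactivity, so
# the same excursion feeds on itself and runs away.
positive = simulate(void_coefficient=+0.05)

print(f"negative coefficient, final relative power: {negative[-1]:.2f}")
print(f"positive coefficient, final relative power: {positive[-1]:.2f}")
```

With a negative coefficient the power settles below nominal; with a positive one it climbs without bound until something else intervenes — at Chernobyl, the destruction of the core itself.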

Thirty-six seconds after the test began, someone tried to shut down the reactor by lowering the boron control rods. The operators apparently didn't realize that the control rods were tipped with graphite, which enhanced rather than reduced reactivity as they began their descent into the reactor.

Plant operators reported two explosions at 1:24 a.m. The first was caused by the extreme buildup in steam pressure, the second by hydrogen accumulation from zirconium-steam interactions. Together, they blew off the reactor's 2,000-ton cover plates and completely destroyed the reactor core. Since the reactor was not enclosed within a containment structure, these explosions spewed roughly 50 tons of radioactive substances into the atmosphere and across the landscape.

As Chernobyl disaster lead investigator Valery Legasov later observed, "The only way of stopping the nuclear reaction was for the reactor to rearrange itself: which it did."

Later that day, a memo from the Ministry of Energy to the Communist Party Central Committee reported that an explosion had occurred and that the resulting fire had been extinguished by 5 a.m., which was not true. The Ministry of Health decided that "the adoption of special measures, including evacuating the population from the city, is unnecessary."

As a consequence, Soviet officials delayed 36 hours before ordering the evacuation of the 45,000 residents of the nearby town of Pripyat. By May 14, about 116,000 people had been evacuated from a 19-mile radius of the reactor. Ultimately, around 220,000 people were resettled into less contaminated areas. The largely uninhabited Chernobyl exclusion zone today encompasses an area of around 1,600 square miles—larger than the state of Rhode Island.

Soviet military helicopters dropped 5,000 tons of boron, lead, sand, clay, and polyvinyl acetate glue onto the still-smoking reactor remains to seal it off. After 1,800 sorties over nine days, the reactor's graphite core fire was contained by May 4.

By November, the reactor remains were encased in a 300,000-ton structure of concrete and steel dubbed the "sarcophagus." This was meant as a temporary measure, but it wound up being more temporary than intended: Under the assault of rain and cold, the sarcophagus began rapidly deteriorating. To remedy this problem, international donors supplied over $2 billion to build the New Safe Confinement structure. Completed in 2019, it is the largest moveable land-based structure ever built and was designed to last 100 years. But in February 2025, a Russian drone punctured the structure's roof. Consequently, an International Atomic Energy Agency (IAEA) inspection in November 2025 found that the structure has "lost its primary safety functions, including the confinement capability."

How Central Planning Produced Defective Reactors

Behind the bad design and human error at Chernobyl, a deeper pair of problems was lurking: central planning and totalitarian secrecy.

The Soviet system put economic decisions in the hands of planners far removed both from the data needed to make sound decisions and from the immediate consequences of their choices. Gosplan, the economic planning bureau, initially determined that nuclear power was unnecessary because the country had more than enough fossil fuels to produce electricity. When it became clear in the late 1960s that they had miscalculated, the energy planners rushed the development of nuclear power. In the process, they neglected to include the containment buildings used in the West, which are designed to prevent the escape of radioactive materials even during severe accidents.

Containment, you see, would have increased the costs of the plants by 25 percent to 30 percent. The "leaders of the Soviet energy sector faced a choice between disrupting the Party's five-year development plan if they built expensive nuclear facilities or abandoning the project altogether," a group of Russian researchers noted in 2025 (originally in Russian). "Priority was given to the solution that was safe for the officials, but which subsequently created a threat to people's lives." After the disaster, an IAEA engineer told the Los Angeles Times that if the Chernobyl reactor had been housed within a standard Western-style containment structure, "it probably would have made a huge difference." Even if an explosion breached a containment structure, most of the radioactive particles would nevertheless have been trapped.

The Soviet nuclear power industry was plagued by shoddy workmanship, bureaucratic infighting, and a shortage of trained personnel. The Canadian historian David Marples summed up some of the problems in his 1986 book Chernobyl and Nuclear Power in the USSR: "lack of quality control, an unskilled and dissatisfied workforce, supply problems, defective equipment, lagging construction, plans arriving late, design changes and cost overruns." More broadly, the pervasive fear of authority that the Soviet system fostered created a compliance-driven and blame-shifting operational culture.

In order to address the perennial problem of bureaucratic ass covering, the KGB—the secret police—embedded a network of spies at places like Chernobyl. These embedded agents documented the plant's problems repeatedly over the years. A 1973 confidential memo cited "serious inadequacies" with the concrete and steel reinforcement used in constructing the Chernobyl reactors. (It also noted that theft of materials from the site was rife.) In 1978, six months before the second reactor began operation, a secret report described "gross violations of technical construction standards, fire safety, and construction and assembly safety technique, which will lead to unfortunate situations." A 1982 report noted that an area contaminated by an accidental release of radioactive isotopes from Chernobyl's Unit 1 reactor was being handled by "burying them with earth and leaves." It added that the KGB was "performing operations to prevent the spread of panic, provocative rumors and other negative incidents in connection with this occurrence."

A May 1983 memo from a KGB lieutenant colonel concluded that Chernobyl's atomic energy stations "at the present time are the most dangerous with regards to their future use, which could have alarming consequences." (This text was underlined in the original document.) In 1984, a series of KGB reports detailed the dangerous technical flaws in Chernobyl's reactors, along with ongoing operational sloppiness by plant workers and their managers.

None of this secret reconnaissance prevented the disaster.

Even after the meltdown, the KGB kept trying to stop the flow of information. A July 1986 memo classified as secret, among other things, information about the "true causes of the accident," the "quantities and content of the mixture [of radioactive particles] ejected at the time of the accident," and "mass poisoning and epidemic sickness rates connected to the accident."

This allergic reaction to the idea of transparency prevented people from getting important information in time. But it was standard operating procedure. The Soviets had already suffered several previous large radiation release accidents, including a 1957 nuclear waste explosion at a weapons production facility at Chelyabinsk that exposed 270,000 people to high levels of radiation. But these were kept secret. Indeed, Chernobyl's deputy chief engineer later testified that he had not been told of any of the previous accidents at similar reactors. Minister of Power and Electrification Anatoly Mayorets epitomized this approach when he decreed in 1985 that information regarding any adverse effects of the energy industry on employees, people, and the environment was not suitable for publication or broadcast.

"It has been said that experience is learning from mistakes; and bitter experience is learning from one's own mistakes," argued the Harvard nuclear physicists Alexander Shlyakhter and Richard Wilson in their 1992 account of the Chernobyl tragedy. "Secrecy then, is inimical to safety, for with secrecy about accidents, one can only learn from one's own mistakes and not from the mistakes of others….Excessive secrecy is characteristic of all totalitarian regimes and is one of their principal weaknesses."

Unsurprisingly, Soviet leader Mikhail Gorbachev waited weeks to deliver a televised speech acknowledging and addressing the Chernobyl crisis.

Photo: An abandoned school in the exclusion zone, 2026; Louis Lemaire-Sicre/Le Pictorium/Alamy

The Health Effects

Chernobyl was a disaster, but it wasn't as disastrous as some antinuclear activists have claimed. To take one infamous example, a 2009 report initially organized and funded by Greenpeace claimed that Chernobyl led to about 985,000 deaths from April 1986 through the end of 2004. The report ominously added, "The number of Chernobyl victims will continue to grow in the next several generations."

More comprehensive reports have discredited Greenpeace's claims, finding that mortality rates associated with the Chernobyl disaster are, fortunately, lower by orders of magnitude. An analysis by researchers at the International Agency for Research on Cancer calculated that the radioactive fallout from Chernobyl would cause 16,000 cases of thyroid cancer and 25,000 cases of other cancers, resulting eventually in 16,000 deaths throughout Europe by 2065. A 2008 report by the United Nations Scientific Committee on the Effects of Atomic Radiation concluded that while "those exposed to radioiodine as children or adolescents and the emergency and recovery operation workers who received high doses are at increased risk of radiation-induced effects, the vast majority of the population need not live in fear of serious health consequences from the Chernobyl accident."

Similarly, the Israeli radiation researcher Yehoshua Socol concluded in 2015 that there "is little scientific evidence for carcinogenic, mutagenic or other detrimental health effects caused by the radiation in the Chernobyl-affected area, besides the acute effects and small number of thyroid cancers." A 2019 study of solid tumor trends in Ukraine 30 years after Chernobyl found "rates of solid organ malignancy in the five regions most affected by fallout did not substantially differ from national patterns."

One big initial concern was that the uptake of radioactive iodine released by the Chernobyl explosion would increase the rate of thyroid cancers. Iodine is essential to the creation of the thyroid hormones that regulate a body's energy use, and growing children and adolescents drinking fresh milk more readily absorb iodine. When radioactive iodine fell on pastures grazed by local milk cows, it got into Soviet food supplies. Given that radioactive iodine has a half-life of eight days, warning people to refrain from drinking fresh milk for a while would have protected children in the region from effects of Chernobyl contamination.
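"For a while" turns out to be a matter of weeks, not years. Taking the article's eight-day half-life as given, a back-of-the-envelope decay calculation shows how quickly the iodine-131 hazard would have passed:

```python
# Back-of-the-envelope check: with an 8-day half-life, how many days until
# iodine-131 activity falls below 1% of its initial level? The half-life
# figure is from the text; the rest is standard exponential decay.

HALF_LIFE_DAYS = 8.0

def remaining_fraction(days):
    """Fraction of the original activity left after `days` days."""
    return 0.5 ** (days / HALF_LIFE_DAYS)

days = 0
while remaining_fraction(days) > 0.01:
    days += 1
print(days)  # 54 -- under two months of avoiding fresh milk
```

In other words, a temporary warning against fresh milk lasting roughly two months would have let the contamination decay to about 1 percent of its initial activity, at essentially no cost to the Soviet state.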

But Soviet secrecy again won out. According to Shlyakhter and Wilson, "appeals by private individuals in south-eastern Belorussia to children not to drink milk in the first weeks of May 1986 were stopped on the grounds that the appeals might cause panic." A 2024 analysis by two Polish researchers calculated that about a quarter of the 19,000 cases of thyroid cancer detected in Russians, Ukrainians, and Belarusians who were under age 18 at the time of the Chernobyl disaster can be traced to ingesting radioactive iodine spewed by the reactor.

Shlyakhter and Wilson worried the world would learn the wrong lessons from Chernobyl. They pointed out that the 1984 toxic gas leak at a pesticide plant in Bhopal, India, had a far greater effect on public health than the nuclear disaster did: It killed thousands of people within days—with eventual death toll estimates running as high as 22,000—and exposed another 570,000 to high levels of toxic gas. Yet no one demanded an end to the pesticide industry. "In our view, therefore, it would be erroneous and in the long term possibly even disastrous to conclude that the world should not have nuclear power," they argued. "This would badly reduce the world's options in coping with human poverty and other needs, and with reducing the global environmental changes that may arise from the excessive burning of fossil fuels."

They were right. Chernobyl supercharged the anti–nuclear power movement, especially in Europe and the United States. As a consequence, nuclear power plant construction stalled around the world, resulting in more deaths from air pollution than would otherwise have occurred—plus increased greenhouse gas emissions that contribute to rising average global temperatures.

Nuclear power wasn't the problem in Chernobyl. The problem was communism.
