Strategic Initiatives
12,249 stories · 45 followers

Don’t Bet on Unions. Competition is a Better Cure


LLM (google/gemini-3.1-flash-lite-preview-20260303) summary:

  • Political Grandstanding: the mayor promotes ineffective collective bargaining schemes to consolidate power while ignoring economic reality
  • Economic Obsolescence: organized labor artificially inflates wages without increasing productivity leading to stagnant growth
  • Corporate Illusion: belief that businesses absorb all costs is a naive misunderstanding of market mechanics and price pass-through
  • Job Destruction: monopolistic labor practices historically correlate with the decline of key industries and manufacturing employment
  • Exclusionary Tactics: current systems prioritize existing members at the expense of new workers seeking employment opportunities
  • Cost Burden: higher wage mandates inevitably manifest as increased prices damaging the purchasing power of the average consumer
  • Market Flexibility: competitive labor environments provide greater long-term job security through investment and business formation
  • Rational Reform: true economic prosperity requires competitive tax policies and deregulation rather than bureaucratic union expansion

Courtesy Michael M. Santiago/Getty Images

On Sunday, New York City Mayor Zohran Mamdani spoke at a rally for the launch of Union Now, a new organization aimed at expanding organized labor’s reach. It’s the latest in a series of early pro-union moves by the mayor, including joining a nurses’ strike picket line and pushing to unionize tenants.

Mamdani’s approach is misguided. Competition, not unionization, is the surer path to improving conditions for both workers and tenants. In fact, more unions will mainly accomplish one “goal”: consolidating political support for policies that ultimately make the affordability crisis worse.

It is well established that labor unions—which secure exclusive bargaining rights over workers’ terms of employment—can raise wages for their members relative to what they would otherwise have been paid. But unless matched by increases in productivity, the gains amount to rent seeking: extracting benefits without greater output. A 2025 National Bureau of Economic Research study, for example, found that most of the wage gap between unionized and nonunionized workers reflects unions’ ability to extract more rents for workers, rather than underlying productivity differences.

Who bears these costs? At Sunday’s rally, Sara Nelson, president of the Association of Flight Attendants-CWA, suggested they fall largely on management and corporate profits. “Too often, the boss has all the power to starve workers during a fight,” she said. “Union Now will work with unions directly to ensure workers have the means to win.”

In reality, the burden is often more diffuse. In highly competitive industries, where firms have limited ability to pass on higher labor costs, those higher costs might indeed take the form of lower profits. A 2011 study estimates that, if the entire economy were unionized, profits would fall by 20 percent.

But that is not necessarily a win for workers. Lower expected profits tend to reduce investment, slow business formation, and push jobs to places where compensation more closely reflects productivity (or offshores them entirely). A 2025 Mercatus Center study of the Rust Belt echoes this pattern, finding that “[u]nions wielding monopoly privileges and fueling strikes and labor conflicts were responsible for 55 percent of the region’s decline in US manufacturing employment.”

For poorer workers, the effect is especially pronounced. Unions’ use of coercive power to extract higher wages for existing workers comes at the expense of outsiders who might otherwise have worked for less. The result is higher unemployment and fewer opportunities. Empirical data show that private-sector job growth in right-to-work states has been far more robust than in non-right-to-work states.

Then there’s the other side of the equation: since only a portion of union-driven cost increases comes out of profits, much of the rest is borne by consumers in the form of higher prices. And since most people are both workers and consumers, higher wages are often offset by higher prices.

A more promising approach is to encourage competition. Moving away from New York’s union-dominated labor markets—through policies like right-to-work laws, which eliminate mandatory union membership—aligns wages more closely with productivity and expands opportunities for workers. Just as important, it lowers costs for consumers by reducing the upward pressure on prices.

The benefits show up in investment and job creation. States with more flexible labor markets tend to attract more business investment, which in turn drives employment growth. The job security unions provide—making it harder to dismiss workers—is real for those inside the system. But a more dynamic labor market offers a different kind of security: the ability to find new opportunities quickly, often on terms that exceed standardized union contracts.

Right-to-work laws aren’t the only path forward for New York, though they would be a step in the right direction. The state ranks last on the Tax Foundation’s Competitiveness Index, and the group’s 2026 report outlines several ways to improve that standing. Fully repealing New York’s capital stock tax, allowing full expensing of investments in machinery and equipment, and cutting corporate taxes are just a few sensible measures the state could adopt to expand investment and give workers more options.

While unions fall short as an economic solution to New York’s affordability crisis, they have nonetheless proven effective as a political force. That helps explain Mamdani’s emphasis on expanding them. But for New Yorkers concerned with rising costs, the priority should be on policies that increase competition, attract investment, and expand opportunity rather than on merely redistributing a shrinking pie.


Scientists Identify The Most Dangerous Time in Life to Gain Weight : ScienceAlert


LLM (google/gemini-3.1-flash-lite-preview-20260303) summary:

  • Statistical Correlation Claims: early weight gain is vaguely associated with higher mortality risks compared to later life weight gain.
  • Arbitrary Age Groupings: the researchers focused on obesity onset between ages 17 and 29 to manufacture their alarming 70 percent mortality statistic.
  • Reliance On Datasets: the study utilizes secondhand records from 600,000 individuals rather than conducting rigorous direct physical interventions.
  • Obsolete Metric Dependency: the findings rely heavily on bmis, a notoriously imprecise tool that fails to differentiate between fat and muscle.
  • Speculative Biological Mechanisms: the researchers guess that theoretical wear and tear or internal stress might explain why some people die earlier than others.
  • Convenient Omissions: factors like diet and exercise were ignored, leaving the study to make broad assumptions without accounting for basic lifestyle choices.
  • Predictable Bureaucratic Narrative: the authors inevitably push for more political intervention to regulate public weight rather than trusting individual agency.
  • Admission Of Uncertainty: the lead researcher concedes the numerical risk figures are not accurate and advises against taking these statistics too seriously.

A new study suggests it's not just gaining weight that affects our health over a lifetime, but also when we put on the pounds, with weight gain during early adulthood more strongly associated with mortality risk.

Participants who first developed obesity between the ages of 17 and 29 were around 70 percent more likely to die of any cause during the follow-up period, compared with those who hadn't developed obesity by the age of 60.

Led by a team from Lund University in Sweden, the study was designed to track weight over time rather than relying on a single snapshot. Information on more than 600,000 people was obtained from an existing dataset, including only those with at least three recorded weight measurements between ages 17 and 60.

While the study doesn't show that the early weight gain was specifically responsible for the deaths, rather than any other factor, we know that obesity is linked with a host of health problems.

"The most consistent finding is that weight gain at a younger age is linked to a higher risk of premature death later in life, compared with people who gain less weight," says epidemiologist Tanja Stocks, from Lund University.

According to the researchers, spending more years living with the biological stress of being overweight, with the body under greater pressure and at higher risk of wear and tear than normal, may help explain the earlier deaths.

The team tracked overall mortality and deaths linked to numerous obesity-related conditions, including cardiovascular diseases, several kinds of cancer, and type 2 diabetes.


Obesity onset was defined as the first time a recorded body mass index (BMI) reached 30 or higher. Using BMI was standard practice at the time the weight measurements were taken, though definitions of obesity are evolving.

In addition to the primary finding about weight gain in early adulthood, several other associations are worth noting. As expected, those who gained the most weight across any and all ages were more likely to die during the study period.

Cardiovascular diseases, including heart attacks and stroke, accounted for the largest share of these associations.

"Our findings of higher all-cause and cardiovascular disease mortality associated with early weight gain and obesity onset suggest that the duration of obesity, rather than weight gain in late adulthood, may be the key factor underlying risk," the researchers write in their published paper.

"Long-term exposure to insulin resistance, inflammation, and hypercoagulation due to adipocytokines released from adipose tissue likely contribute to these risks."

Deaths from type 2 diabetes and certain cancers were also linked to obesity, but a few causes of death – including bladder cancer in men and stomach cancer in women – showed no connection, statistically.

There were differences between men and women, too.

For cancer in women, the increased risk of premature death linked to obesity was roughly the same regardless of when the weight gain occurred. This suggests another factor is more significant here than elsewhere – perhaps hormonal changes related to menopause.

Male and female weight gain was tracked over the course of adult lifetimes. (Le et al., eClinicalMedicine, 2026)

"If our findings among women reflect what happens during menopause, the question is which came first: the chicken or the egg?" says epidemiologist Huyen Le, from Lund University.

"It may be that hormonal changes affect weight and the age and duration over which these changes occur – and that weight simply reflects what's happening in the body."

There are limitations to mention here. Exercise and diet weren't accounted for and may well have played a role in the mortality rates observed by the researchers – we know they're also crucial to overall health.

Adding data on these factors could be an option for future study, the study authors note, as could examining fat distribution, which newer definitions of obesity do, and distinguishing between fat and muscle mass.

But with so many participants involved and weight tracked over several years for each one, the researchers suggest these are important findings for public health: efforts to prevent obesity should begin as early in life as possible.


To put the mortality risk discovered here into numbers: if 10 out of every 1,000 participants without early obesity died over the follow-up period, about 17 in 1,000 died among the group who developed early obesity.
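As a rough check of that arithmetic, a 70 percent relative increase applied to a baseline of 10 deaths per 1,000 does indeed land at roughly 17 per 1,000. A minimal back-of-the-envelope sketch (not the study's own method):

```python
# Back-of-the-envelope check of the quoted mortality figures (illustrative only).
baseline_per_1000 = 10          # deaths among participants without early obesity (from the article)
relative_increase = 0.70        # ~70% higher all-cause mortality (from the article)

early_obesity_per_1000 = baseline_per_1000 * (1 + relative_increase)
extra_deaths_per_1000 = early_obesity_per_1000 - baseline_per_1000

print(f"Early-obesity group: ~{early_obesity_per_1000:.0f} deaths per 1,000")   # ~17
print(f"Absolute difference: ~{extra_deaths_per_1000:.0f} extra deaths per 1,000")
```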

"We shouldn't get too hung up on exact risk figures," says Stocks. "They are rarely entirely accurate, as they are influenced, for example, by the factors taken into account in the study and the accuracy with which both risk factors and outcomes have been measured."

"However, it's important to recognize the patterns, and this study sends an important message to decision-makers and politicians."

The research has been published in eClinicalMedicine.


China Bans Meta’s Acquisition of Manus on National Security Grounds - WSJ


LLM (google/gemini-3.1-flash-lite-preview-20260303) summary:

  • Regulation Order: china national development and reform commission mandates the reversal of the meta acquisition of manus ai.
  • Security Grounds: government officials cite national security concerns as the primary justification for banning the technology deal.
  • Startup Origin: the ai firm was initially developed by engineers at a chinese entity known as beijing butterfly effect technology.
  • Operational Relocation: the startup moved the majority of its staff to singapore following venture capital investment prior to the meta purchase.
  • Corporate Acquisition: meta completed the acquisition process in late december before facing immediate regulatory scrutiny in january.
  • Executive Restrictions: national authorities prevented manus cofounders residing in beijing from leaving the country during the investigative process.
  • Policy Objectives: regulatory intervention aims to suppress business departures and strictly control the export of domestic artificial intelligence assets.
  • Ongoing Dispute: affected parties anticipate further resolution processes while working to address the status of remaining domestic business operations.


Manus has developed an AI agent that can carry out tasks such as writing in-depth research reports. (Raul Ariano/Bloomberg News)

China has ordered that Meta Platforms’ $2.5 billion acquisition of artificial-intelligence startup Manus be unwound.

China’s National Development and Reform Commission, which has the authority to review foreign investments, said Monday that it has banned the acquisition and ordered it to be rescinded on national security grounds.

“The transaction complied fully with applicable law,” a Meta representative said in an email statement. “We anticipate an appropriate resolution to the inquiry.”

Manus has developed an AI agent that can carry out sophisticated tasks such as writing in-depth research reports and preparing presentation slides. Early versions of Manus were created by engineers at Beijing Butterfly Effect Technology, which was founded in China in 2022. 

Last year, a Singapore-based entity, also called Butterfly Effect, took over the operations of the AI agent product in markets outside China. Manus later relocated most of its China-based employees to Singapore after receiving investment from a California-based venture-capital firm.

Then, in late December, Meta acquired Manus. Days later, Chinese authorities announced a review of the deal in January, saying that cross-border acquisitions and the export of technology must comply with the law. 

Last month, Chinese officials called the two co-founders of Manus—Xiao Hong and Ji Yichao—in to discuss the deal in Beijing. The executives, who currently work for Meta, were later told not to leave the country during the investigation, The Wall Street Journal has reported. Many of Manus’s top executives are Chinese nationals.

Under Chinese law, foreign investments that may carry a national-security risk can be subject to review by the authorities. Chinese authorities believe they have the power to demand that the deal be unwound because Beijing Butterfly Effect Technology remains a Chinese company.

After the acquisition, Meta said that there would be no continuing Chinese ownership interest in Manus and that the startup would discontinue its China-based services and operations. Manus was working to close Beijing Butterfly Effect Technology, but hasn’t yet done so.

The series of events that culminated in Meta’s purchase of China-originated technology has angered Chinese regulators. They are worried that the steps Manus took would spur other Chinese companies to follow suit and move out of China without Beijing’s approval. 

The inquiry into the Manus sale reflected Beijing’s broader attempts to protect the country’s AI know-how at a time when the U.S. and China are locked in an intensifying technology race. Both countries have tightened export controls, restricted the exchange of tech professionals and curbed cross-border investment.

Manus and Xiao didn’t immediately respond to requests for comment. Ji couldn’t be reached for comment.


Raffaele Huang is a reporter for The Wall Street Journal in Singapore, covering Asia’s technology companies. Previously, he mainly focused on corporate news in the automotive and technology sector in Beijing.



Only antimatter provides the energy we need for interstellar travel - Big Think


LLM (google/gemini-3.1-flash-lite-preview-20260303) summary:

  • Interstellar Challenges: reaching distant star systems requires overcoming immense obstacles regarding distance time speed and propulsion efficiency.
  • Energy Limits: current chemical rockets are inefficient while nuclear fission and fusion offer only marginal improvements in energy conversion.
  • Antimatter Potential: matter antimatter annihilation represents the most efficient possible fuel source by converting total mass into energy.
  • Production Hurdles: existing laboratory methods for creating antimatter have only produced minute quantities far below the requirements for spaceflight.
  • Storage Requirements: maintaining stable antimatter containers is technically demanding and current traps are insufficient for holding large amounts.
  • Thrust Generation: redirecting high energy gamma ray photons via grazing angle mirrors is essential to convert annihilation into useful acceleration.
  • Spacecraft Strategy: practical interstellar travel necessitates continuous acceleration at constant rates followed by symmetrical deceleration to reach destinations within human lifespans.
  • Propellant Mass: matter antimatter technology eliminates the crushing weight of traditional chemical or fusion fuel by utilizing maximum possible exhaust velocity.

As humanity basks in the aftermath of the unprecedented success of Artemis II, which took humans back to the Moon for the first time in 54 years and brought them farther from Earth than ever before, many of us can’t help but think about grander goals. As a species, we don’t just dream of returning to the Moon, but of heading to places we’ve never been: other planets, other star systems, or even other galaxies. However, there are big problems we have to solve if we ever want to send humans outside of the Solar System: the problems of distance, time, speed, and fuel efficiency.

Interstellar distances are huge, even compared to the vast interplanetary distances we encounter in the Solar System. With current rocket technology, it would take hundreds of human lifetimes to reach even the nearest star, and that’s because we’re limited by speed, which is in turn limited by the efficiency of our fuel sources. Chemical-based rockets leverage quite efficient fuel sources, like liquid oxygen and liquid hydrogen, but transform less than a millionth of the fuel’s rest mass into energy. If we went to nuclear fission-powered propulsion, we could transform about a thousandth of our fuel source’s rest mass (0.1%) into energy, while nuclear fusion-powered propulsion could convert up to almost a hundredth (0.7%) of its fuel source’s rest mass into energy.

But the ultimate fuel source would be a matter-antimatter annihilation: It’s 100% efficient. Barring the discovery of some new law of physics, only antimatter provides the power we’d need for realistic interstellar travel.
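To put those conversion fractions in perspective, here is a minimal sketch, using the rough percentages quoted above and E = mc², of how much energy one kilogram of each fuel could release. The fractions are the article's figures; the comparison is illustrative, not an engine design.

```python
# Energy released per kilogram of fuel, E = eta * m * c^2, using the rough
# conversion fractions quoted above (illustrative comparison only).
C = 2.998e8  # speed of light, m/s

fuels = {
    "chemical (LOX/LH2)": 1e-6,   # "less than a millionth" -- treated as an upper bound
    "nuclear fission":    1e-3,   # ~0.1%
    "nuclear fusion":     7e-3,   # ~0.7%
    "matter-antimatter":  1.0,    # ~100%
}

for name, eta in fuels.items():
    energy_joules = eta * 1.0 * C**2          # energy from 1 kg of fuel
    print(f"{name:20s} ~{energy_joules:.1e} J per kg")
```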

The launch of Apollo 17, the ninth and final crewed mission to the Moon as part of the Apollo program, was also the first nighttime liftoff of a Saturn V rocket, occurring on December 7, 1972. The Saturn V remains the heaviest launch vehicle in history, capable of carrying the greatest masses to low-Earth orbit of any rocket ever.
Credit: NASA

Three main challenges arise in the endeavor to use antimatter as rocket fuel, all of which must be overcome if we actually want to have humans embark on an interstellar journey.

  1. The creation of antimatter. We know how to do this one in laboratory settings, and although it does require much more energy to make the antimatter than we eventually release from its annihilation, that’s not really a problem. What is a problem is that we would have to make antimatter in large amounts. If you add up all the antimatter ever made in all the labs in the history of Earth, you end up with just about a microgram’s worth of antimatter. We’d need many millions of times more to power an interstellar journey.
  2. The storage of antimatter. The very thing that makes antimatter such a fantastic fuel source — its propensity for annihilating with any normal matter that it contacts — makes it a liability for use as a fuel source. Somehow, we have to store this antimatter in a safe, stable way, and then transport it into a place where it undergoes a controlled annihilation with an equal-and-opposite amount of normal matter.
  3. The usability of energy derived from matter-antimatter annihilation. Assuming we can overcome these first two problems, we then have to turn that energy of annihilation into useful thrust: ideally by shunting the post-annihilation particles in the opposite direction we want the spacecraft to accelerate.

Let’s look at these problems a little more in depth, one at a time.

Whether two particles collide inside an accelerator or in the depths of space is irrelevant; all that matters is that we can detect the debris of what comes out, including newly created “daughter” particles. Although the flux of high-energy particles in space is lower, the achievable energies are far greater than in terrestrial laboratories. Under high-energy conditions, new particle-antiparticle pairs can be generated during these events, including both fundamental (quark, lepton) particles and composite (baryon, meson) ones.
Credit: flashmovie / Adobe Stock

We know how to create antimatter in labs; you simply smash particles together at high energies. For example, the easiest way to make an antiproton is to smash two protons together at high-enough energies so that, after the collision, there’s enough “extra” energy, via Einstein’s E=mc², to make an extra proton-antiproton pair. Protons are a dime-a-dozen, so we don’t really care about keeping them — the antiprotons are the antimatter that we’re after. Therefore, we design electric and magnetic fields to bend and confine them so that they don’t annihilate away with the first proton they encounter.

The Large Hadron Collider (LHC) at CERN is the location of the highest-energy proton-proton collisions, but we’re better off using lower-energy collisions to create antiprotons. The reason is that if we create an antiproton from a high-energy collision, the antiproton will have a lot of kinetic energy, making it difficult to control, confine, or keep from running into something that contains a proton. In fact, the record-setting collider prior to the LHC, the Tevatron at Fermilab, used to make antiproton beams that would collide with protons in its accelerator, leading to incredible fundamental discoveries in the late 20th century, including the top quark.

Just as an atom is a positively charged, massive nucleus orbited by one or more electrons, antiatoms simply flip all of the constituent matter particles for their antimatter counterparts, with positron(s) orbiting the negatively-charged antimatter nucleus. The same energetic possibilities exist for antimatter as matter. First hypothesized in 1928/9 by Dirac, antimatter (in the form of positrons) was first detected in the lab only a few years later: in 1932.
Credit: Katie Bertsche/Lawrence Berkeley Lab

We know how to store antimatter in labs as well. Electric and magnetic fields are used to bend charged particles, and if you know the mass, charge, and kinetic energy of the antimatter you’ll be creating, you can leverage those fields to store antimatter indefinitely. First, you slow the particles down, and then you create a near-perfect vacuum inside a cavity: a region devoid of normal matter except for the container walls. You then set up a series of quadrupole electric fields and uniformly homogeneous magnetic fields inside the cavity to confine particles of a uniform mass and charge within it: a Penning trap.
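As a rough illustration of why the particles must first be slowed, consider how the radius of an antiproton's circular motion in a strong magnetic field shrinks as its kinetic energy drops. This is a sketch with assumed, typical values (a 2 T field and illustrative energies), not the parameters of any specific experiment.

```python
# Cyclotron (Larmor) radius r = m*v / (q*B) for an antiproton in a Penning-trap-like
# magnetic field. Field strength and kinetic energies are assumed, illustrative values.
import math

Q = 1.602e-19      # elementary charge, C
M = 1.6726e-27     # (anti)proton mass, kg
B = 2.0            # assumed trap magnetic field, tesla

for energy_ev in (1e6, 1e3, 1.0):          # 1 MeV, 1 keV, 1 eV kinetic energy
    v = math.sqrt(2 * energy_ev * Q / M)   # non-relativistic speed
    r = M * v / (Q * B)                    # radius of the circular motion
    print(f"{energy_ev:>9.0e} eV antiproton -> cyclotron radius ~{r*1e3:.3f} mm")
```

A freshly created MeV-scale antiproton circles on a scale of centimeters, while one slowed to around an electron-volt stays within a fraction of a millimeter, which is why cooling comes before confinement.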

However, Penning traps are really only useful if you have small numbers of particles to confine. If you have too many particles, the mutual repulsion from having so many like charges together will “push” many of those charges into the walls of the container anyway, causing you to lose the antimatter particles you worked so hard to create and confine. While it’s a tremendous achievement that we just successfully transported antimatter (on a truck!) for the first time, the fact is there were only 82 antiprotons inside the transported Penning trap. (The trap itself weighed over 1,000 kg, and yet it was the most compact large-scale Penning trap ever built.) For a kilogram of antimatter, there would be more than 10²⁷ antiprotons to contain: far too many to confine with a setup like this.

On March 23, 2026, antimatter was successfully transported, via truck, without being destroyed or lost. A device known as a Penning trap, here being loaded onto a transport truck, contained nearly 100 antiprotons within it, all of which were successfully accounted for upon the truck’s completion of its journey. This also represents the smallest, lightest such Penning trap ever successfully operated.
Credit: CERN; Multimedia Production Team, MPT; Arnold, Melanie; Brice, Maximilien

We’re ultimately going to have to take a different approach for antimatter containment, but that may be possible thanks to an experiment conducted in the 2010s and 2020s: the ALPHA-g experiment. Its goal was to:

  • create large numbers of neutral antiatoms,
  • pin them so that they wouldn’t annihilate with the container walls,
  • release them so they could experience Earth’s gravitational field,
  • and measure whether antimatter fell down (as predicted), fell up (which would’ve enabled warp drive), or did something entirely different.

ALPHA-g was a success. It not only proved that antimatter gravitates the same way that normal matter does, but also demonstrated that antiatoms could be created, controlled, and stored, at least temporarily. To usefully apply this to space travel, we’d need to find a way to create large numbers of antiatoms and keep them in a compressed state, where they wouldn’t smash into container walls and could be stored long-term. 

Penning trap experiments (like BASE) and antiatom experiments (like ALPHA-g) represent significant progress toward the goal of antimatter storage, but we still have a long way to go to turn this science-fiction technology into science fact.

The ALPHA-g detector, built at Canada’s TRIUMF facility, was oriented vertically and filled with neutral antiatoms confined by electromagnetic fields. When the fields release, most antiatoms will randomly fly away, but a few that happen to be at rest will have the opportunity to move solely under the influence of gravity. If they had fallen up, many speculations that had previously been constrained to the realm of science fiction would have become plausible. As the experiment showed, however, antiatoms fall down in a gravitational field, killing our best hope for antigravity and warp drive technologies.
Credit: Stu Shepherd/TRIUMF

And then, once you create and store a large quantity of antimatter, you’ll need some way to bring tiny amounts of that antimatter, a little bit at a time, into an “engine” with an equal-and-opposite amount of normal matter. When the matter and antimatter collide together, they’ll annihilate each other into pure energy (in the form of photons) in the exact inverse of the way the antimatter was first created: via Einstein’s E=mc². The problem then becomes what to do with the high-energy gamma-ray photons that arise from annihilation. If you don’t build anything special, they’ll simply smash into the walls of your spacecraft, ionizing atoms and causing damage, rather than generating thrust.

The key to controlling the photons comes from a surprising source: astronomy. In astronomy, we don’t use a conventional mirror to observe high-energy gamma-rays and X-rays; they would either pass through or be absorbed by that type of matter. Instead, we build a cavity filled with mirrors set at very shallow angles (what we call “grazing angles”), which can then be used to control where those high-energy photons wind up. By arranging a series of mirrors of the right material in this fashion, we could ensure that the photons created by matter-antimatter annihilation get shunted out of the rear of the spacecraft, providing thrust in the opposite (forward) direction as part of the equal-and-opposite reaction demanded by physics.

When very high-energy photons are produced, such as X-rays or gamma-rays, they cannot be focused by a conventional mirror; they would simply ionize the electrons present in the material, be absorbed by them, or pass right through them. Instead, if you wish to focus or redirect them, you must set up an array of grazing mirrors to reflect and/or focus that light in the desired direction. This setup shows the High Resolution Mirror Assembly aboard NASA’s Chandra X-ray Observatory, and it could be applied to a matter-antimatter annihilation chamber aboard a spaceship.
Credit: NASA/CXC/D.Berry

This, at last, converts your matter-antimatter annihilation into thrust: again, with up to 100% efficiency. Efficiency, in this case, means that 100% of your fuel (where 50% is matter and 50% is antimatter) gets converted into useful energy, which can be used to generate momentum-changing thrust and accelerate your spacecraft. This leads to a grand plan for reaching another star system:

  • use a big, heavy-lift conventional rocket to place a capsule with your crew, plus the matter, antimatter, and annihilation engine, into Earth’s orbit (make multiple trips if you need to),
  • then begin steadily accelerating toward your destination until you’re moving at your desired speed,
  • then coast,
  • then, at the halfway point, turn the ship to the opposite orientation,
  • then coast again until it’s time to begin decelerating,
  • and then steadily decelerate at the same rate you accelerated at earlier until you arrive at your destination.

If you can accelerate at 1 g (the gravitational acceleration experienced on Earth, 9.8 m/s²) for the first half of the trip, and then turn the ship around and decelerate at the same rate for the second half of the trip, it’s straightforward to calculate the amount of time it would take to reach a nearby star. Instead of tens or hundreds of thousands of years, the trip could be done in mere decades: short enough for a human crew to still be alive when they arrive at their destination. (Albeit, with no return trip possible.)

If you were to get in a spaceship and accelerate at 1g (Earth’s acceleration) for the entirety of the trip, you could travel at nearly the speed of light after only a few years of acceleration. As you increased your speed ever closer to the speed of light, the effects of time dilation would get progressively more severe. In theory, distances of many light-years could be traversed in experienced times of much less than a year, but one must reckon with the weight of realistic fuel sources as well.
Credit: P. Fraundorf/Wikimedia Commons

However, there’s an enormous problem even if you did everything we mentioned up to this point:

  • generate all the needed antimatter,
  • develop antimatter storage,
  • develop and build a suitable matter-antimatter annihilation chamber,
  • and use a conventional launch vehicle to lift the payload and the fuel into Earth’s orbit.

The problem is this: the sheer amount of fuel that you’d need to make an interstellar journey happen. Let’s imagine we have a small payload of only 500 kg (1,102 pounds) including the astronauts and all of their food, water, and supplies. If you want to accelerate it at 1 g, you’d only need a small amount of matter-and-antimatter to annihilate: 16 milligrams worth, or 8 milligrams of matter with 8 milligrams of antimatter.

However, that will only get you one second’s worth of acceleration! If you want to accelerate for longer, you’ll need more fuel. With one gram of matter and one gram of antimatter, you can get two full minutes of acceleration. With 300 grams of matter and 300 grams of antimatter, you can get 10 hours of acceleration, which would get you up to speeds of 368 km/s — more than 82,000 miles per hour. That’s twice as fast as the maximum speed of the Parker Solar Probe, which, to date, is humanity’s fastest spacecraft ever flown.
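Those figures follow from treating the annihilation engine as an ideal photon rocket, where the thrust is F = P/c and the mass annihilated per second is F/c. A minimal sketch of that arithmetic, ignoring the tiny change in ship mass as fuel is spent, reproduces the numbers quoted above to within rounding:

```python
# Ideal photon rocket: thrust F = m_ship * a, radiated power P = F * c,
# so the mass annihilated per second is P / c^2 = F / c.
# Simplification: ship mass held constant, relativity ignored (speeds are << c here).
C = 2.998e8          # speed of light, m/s
G = 9.8              # 1 g acceleration, m/s^2
SHIP_MASS = 500.0    # payload from the article, kg

thrust = SHIP_MASS * G                 # newtons
fuel_rate = thrust / C                 # kg of matter + antimatter annihilated per second
print(f"Fuel burned per second: {fuel_rate*1e6:.1f} mg")        # ~16 mg

for fuel_kg in (2e-3, 0.6):            # 1 g + 1 g, then 300 g + 300 g of fuel
    burn_time = fuel_kg / fuel_rate    # seconds of 1 g thrust this fuel provides
    speed = G * burn_time              # final speed, m/s
    print(f"{fuel_kg*1e3:6.0f} g of fuel -> {burn_time/60:7.1f} min of thrust, "
          f"final speed ~{speed/1e3:.0f} km/s")
```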

This illustration shows the Parker Solar Probe approaching perihelion: its closest approach to the Sun. It achieved its closest approach ever on December 24, 2024, coming within just 4.43 solar diameters of the Sun’s photosphere. It became the fastest spacecraft ever created by humanity during that, and subsequent, perihelion passes.
Credit: NASA’s Goddard Space Flight Center/Scientific Visualization Studio

But if you want to reach even the nearest stars in a reasonable amount of time, you need to travel much faster. Remember, stellar distances are measured in light-years, with the closest star system still over four light-years away. If you want to get there in a couple of decades, you need to accelerate until you reach more than 20% the speed of light, which is more difficult than you might initially think.

Sure, you can do the math and calculate that, in order to reach 20% the speed of light (roughly 60,000 km/s), you’d need about 50 kg worth of antimatter and 50 kg worth of matter as fuel. It would take you approximately 10 weeks of constant acceleration to reach that speed. Then, you’d journey for approximately 20–25 years (the amount of time it would take you to traverse around 4 light-years at that speed), with the astronauts keeping themselves alive with that tiny payload inside. You’d then need to spend another 50 kg worth of antimatter and another 50 kg worth of matter in matter-antimatter annihilation to decelerate, finally coming to rest in the Proxima/Alpha Centauri system if you did your calculations correctly.
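The same simplified photon-rocket arithmetic (constant thrust, ship mass treated as fixed, relativistic corrections ignored) recovers those mission numbers. This is a sketch of the quoted figures, not a mission design:

```python
# Sketch of the 0.2c mission profile: accelerate a 500 kg ship at 1 g to 0.2c,
# coast ~4.25 light-years, then decelerate at 1 g. Same simplifications as above.
C = 2.998e8                      # speed of light, m/s
G = 9.8                          # acceleration, m/s^2
FUEL_RATE = 500.0 * G / C        # kg annihilated per second for a 500 kg ship (~16 mg/s)
LIGHT_YEAR = 9.461e15            # metres
YEAR = 3.156e7                   # seconds

cruise_speed = 0.2 * C
burn_time = cruise_speed / G                       # time to reach 0.2c at 1 g
burn_fuel = FUEL_RATE * burn_time                  # matter + antimatter per burn, kg
coast_time = 4.25 * LIGHT_YEAR / cruise_speed      # time to cross ~4.25 light-years

print(f"Burn time to 0.2c: ~{burn_time/86400/7:.0f} weeks")               # ~10 weeks
print(f"Fuel per burn:     ~{burn_fuel:.0f} kg (half of it antimatter)")  # ~100 kg
print(f"Coast to Proxima:  ~{coast_time/YEAR:.0f} years")                 # ~21 years
print(f"Total fuel (accelerate + decelerate): ~{2*burn_fuel:.0f} kg")
```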

Only with maximum (~100%) fuel efficiency, which you can only achieve with matter-antimatter annihilation, do you avoid the catastrophic problem associated with all other propulsion strategies: the need to bring enormous amounts of fuel on board the spacecraft.

Whenever you collide a particle with its antiparticle, it can annihilate away into pure energy. This means if you collide any two particles at all with enough energy, you can create a matter-antimatter pair. But if the Universe is below a certain energy threshold, you can only annihilate, not create. The pathway to using antimatter as fuel in space involves producing it in copious quantities here on Earth, storing it, and then annihilating it with matter in a controlled fashion inside a spaceship’s reaction engine.
Credit: Andrew Deniszczyc/revise.im

This is the best that the conservation of energy will permit for a spacecraft, at least, given the limits of the laws of physics as we currently understand them. The next-best option to matter-antimatter annihilation, which is 100% efficient at converting matter into energy, is nuclear fusion as it works in the Sun — and that only converts 0.7% of its initial rest-mass into usable energy. Instead of needing 100 kg of antimatter to accelerate a 500 kg payload, you’d need at least thousands of tons (millions of kilograms) of hydrogen, plus a constantly-working fusion reactor, and you’d have to extract and jettison the spent fuel (helium) along the way. Remember: You don’t just need to accelerate/decelerate the mass that’s going to be arriving at your ultimate destination. You need to accelerate/decelerate your unspent fuel at every stage along your journey.

For all methods besides matter-antimatter annihilation, you’re inevitably wasting more than 99% of your mass, meaning that all of that “fuel” just sits there as useless heavy mass that you need to transport during your journey. Matter-antimatter annihilation is the only option for interstellar travel where most of what you’re bringing with you is payload, rather than propellant, and the effective exhaust velocity (because it’s purely in the form of photons) is the maximum possible: the speed of light.

The advances required will be substantial, but of all the fuel sources that we know of, only antimatter, ultimately, is capable of powering our dreams of becoming an interstellar civilization.


How Scientists Changed Their View of Insomnia | RealClearScience

  • historical context: insomnia has affected humanity since ancient times although recent scientific insights provided significant advancements in understanding chronic sleep deprivation.
  • prevalence rates: nearly one third of the adult population in england currently reports frequent symptoms which qualifies insomnia as a widespread psychological concern.
  • independent condition: researchers shifted away from the secondary insomnia model and now recognise the condition as an independent disorder requiring its own targeted treatment.
  • health relationships: addressing sleep patterns can frequently lead to measurable improvements in coexisting physical and mental health issues like chronic pain depression and anxiety.
  • vulnerable groups: demographic factors such as being female older age and lower socio economic status correlate with higher risks of experiencing sleep disruption.
  • behavioral management: experts recommend avoiding prolonged time in bed while awake to prevent the brain from creating negative associations with sleep environments.
  • treatment efficacy: cognitive behavioural therapy for insomnia remains a primary recommendation for successful symptom management despite limited availability in traditional clinical settings.
  • medication risks: reliance on traditional sleeping tablets is discouraged due to long term side effects while newer pharmaceutical options still lack comprehensive long term safety data.

Insomnia may have been torturing humanity since ancient times, but over the last 20 years scientists have made progress in their understanding of chronic sleep deprivation.

Today, sleep deprivation is one of the most widespread reported psychological problems in Britain, with about a third of the adult population in England reporting frequent insomnia symptoms.

Insomnia rarely occurs on its own, which brings us to one of the biggest changes scientists have made in our understanding of chronic sleep deprivation. The vast majority of people with insomnia also have other mental and physical health conditions, like diabetes, hypertension, chronic pain, thyroid disease, gastrointestinal problems, anxiety or depression.

In its diagnostic history, insomnia coupled with another illness or disorder was called secondary insomnia. That meant that insomnia was considered a consequence of those other underlying conditions. As such, until fairly recently clinicians did not generally attempt to treat secondary insomnia.

But in the early 2000s, both research and clinical practice evidence started to indicate that this approach was wrong. Scientists argued that insomnia could precede or long survive a primary condition. Abandoning this distinction between primary and secondary insomnia was a major advance in acknowledging that insomnia frequently was an independent disorder, requiring its own treatment.

What’s more, researchers have been accumulating strong evidence that helping people with their sleeping problems could actually lead to improvements in their other health conditions. Chronic pain, chronic heart failure, depression, psychosis, alcohol dependency, bipolar disorder and PTSD can all improve for patients if they address their sleeping problems.

Who gets insomnia?

Over the past two decades, we have acquired more rigorous and international data illustrating how ubiquitous insomnia is. Insomnia affects almost everyone, though women, older people, and people of lower socio-economic status are more vulnerable to it.

These groups experience a combination of biological, psychological and social risk factors that expose them to long-term sleep-disruption. For example, women often experience acute hormone fluctuations, pregnancy and birth, breastfeeding, menopause, domestic violence, caregiving roles, higher prevalence of depression and anxiety – all of which can lead to more opportunities for prolonged sleep disruption.

Some current issues in insomnia research include the need to understand different types of insomnia symptoms, and their relationship to health and performance risks. For example, there is evidence that difficulty initiating sleep (as opposed to difficulty staying asleep, or waking up too early in the morning) is associated with an increased risk of depression. Similarly, scientists still have questions on changes in things like brain activity, heart rate, or stress hormones that accompany insomnia. In common with all other mental health disorders, we are still yet to find biomarkers of insomnia.

However, research has helped us understand some things people can do to prevent insomnia episodes progressing to chronic insomnia, which is harder to treat. When insomnia symptoms happen more nights than not, and last for more than three months, then a diagnosis of insomnia disorder, or chronic insomnia, can be made.

Insomnia keeping you up? (Lizavetta/Shutterstock)

One of the most common and harmful habits that develop during periods of insomnia is lying in bed, trying to sleep. Scientists have learned that lying in bed awake leads to perpetual cognitive arousal and, in time, it teaches your brain to stop connecting bed and being asleep.

Thus, if you cannot sleep at night, get up and do something else absorbing, but calming – read, write a list for the following day, listen to calming music or do some breathing exercises. When you feel sleepy again, get back to bed. If you are tired the following day, a well-placed short nap is fine, in the afternoon, for a maximum of 20 minutes. However, one must be careful with daytime sleeping, as it may reduce sleepiness at nighttime, and going to sleep may become even more difficult.

For those who do struggle with insomnia, there are effective treatments recommended. The story of the profound changes from secondary insomnia to insomnia disorder speaks of the power of clinical diagnosis in providing a pathway to treatment.

Cognitive behavioural treatment for insomnia (CBTI) is a package of techniques designed to maximise sleepiness at bedtime. It involves structured steps which aim to modify behaviour and mental activity. There are some predictors of treatment success: shorter duration of insomnia symptoms (years, rather than decades), less depression or pain and more positive expectations towards CBTI. But CBTI is broadly effective across all groups of people with insomnia.

Even so, only a tiny proportion of people reporting insomnia symptoms seek medical help. People may consider insomnia symptoms trivial or manageable, or they may be unaware of the options. It may also be due to the unavailability of treatment options. CBTI remains largely unavailable in clinical practice, mainly due to clinicians’ unfamiliarity with the treatment programme, and limited funding.

This pushes patients towards sleeping tablets, which are not an acceptable long-term solution. Sleeping tablets are associated with significant cognitive and motor impairment, increased risk of falls, dependence, tolerance and withdrawal symptoms, daytime lethargy, dizziness and headaches.

The main truly “new” class of sleeping pills are the dual orexin receptor antagonists (DORAs), which have shown a safety profile in many ways better than the traditional sedatives, especially around dependence concerns. But DORAs are not risk free or “mild” pills. They are relatively new to the market, first approved in the UK in 2022. So we lack long-term data to assess their safety for long-term use in people with insomnia.

A decent alternative is online self-delivered CBTI, on platforms such as Sleepful, which are free to access.

We have made great strides in sleep medicine for people with insomnia over the past 20 years; we just need to realise the potential of such profound changes by providing the right help for those suffering with it.

Iuliana Hartescu, Senior Lecturer in Psychology, Loughborough University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


What I Learned From Teaching Darwin | RealClearScience

  • Course structure: a graduate seminar in twenty twenty five examined darwinian thought through foundational texts and critical discussion
  • Darwinian methodology: scientific inquiry in the nineteenth century relied on descriptive prose and observation rather than modern statistical formalisms
  • Academic climate: higher education benefits from confronting difficult and offensive historical materials with rigor rather than avoidance
  • Human complexity: primary source study reveals a tension between scientific brilliance and the presence of harmful social hierarchies regarding race and gender
  • Science education: training future scientists requires developing judgment to navigate historical prejudice while maintaining intellectual integrity
  • Current trends: the modern scientific landscape emphasizes career metrics and narrow specialization over the pursuit of broad knowledge
  • Interdisciplinary vision: the pursuit of truth requires crossing disciplinary boundaries to address massive societal challenges like artificial intelligence and climate change
  • Professional incentives: current academic structures prioritize publication volume and reputation over the fundamental relationship between humans and the natural world

During the fall semester of 2025, I taught a graduate seminar entitled “Darwinian Thought and Society.” While teaching should always derive from generosity and a desire to share knowledge, my motivations were partly selfish. The course was an opportunity for me to re-engage with Darwin’s foundational ideas in the company of some of the brightest junior scientists that I’ve ever come across.

The conversations around the first book we read, Darwin’s 1859 “On the Origin of Species,” were mostly familiar ones. We often discussed his use of evidence and his voluminous knowledge of natural history. But other features stood out. For example, the manner in which the book delivered its theoretical argument is unlike what scientific opuses do today. The book contains no equations and only a single figure, a diagram often described as the world’s first phylogenetic tree. It contains no detailed experimental design. There are no statistical methods, or power calculations. Yet the ideas in it are among the most radical and dangerous in the Western canon. In 2026, science has been forced into a deep reflection phase with regard to its present and its future. Revisiting Darwin offers useful lessons for the terrain we now occupy.

One of the revelations from the course was something that I already knew but had not fully absorbed: that the most compelling aspect of Darwin’s work is the manner in which he arranged, staged, and presented the evidence. Darwin did not persuade with numbers, strict axioms, or formalisms. He built worlds for the reader to inhabit, such that our doubts collapse under the steady pressure of his observations, stitched together through turns-of-phrase. In this sense, “On the Origin of Species” — arguably the most important scientific treatise since Isaac Newton’s “The Mathematical Principles of Natural Philosophy” — can be responsibly described as a work of prose and science communication. And his broader corpus is a gift because it encourages conversation rather than fruitless provocation and allows us to discuss, with clarity, even his most controversial ideas.


On the first day of class, I joked with students that I would play the role of their politically conservative uncle. That is, there would be no trigger warnings and none of the cushioning that has become standard in college courses that include exposure to ideas and readings with offensive language or content. I told them that we would read Darwin’s books as they were written and try to understand them, and if they didn’t like that, to enroll in a different course. The larger lesson was simple: To study a complex world, you must read difficult material and learn to interpret it with rigor and empathy.

I was priming the class for Darwin’s views on race and gender, ideas that complicate many of our largely positive opinions of him (mine included). Some of my selective memory, which demotes his problematic takes, has support: There is a literature on how progressive he was compared to scientists like his cousin Francis Galton, who coined the term “eugenics” in 1883. But reading Darwin’s 1871 book “The Descent of Man” in a classroom with several young women from around the world softened my rigid stance that the right response to backward takes is to simply get over them. I still believe that refusing to read or interpret such work is unscholarly. But I also came to admit something I had been too eager to brush aside: Even when we consider historical context, there is still something painful about reading a giant of science describe human differences in the language of hierarchy, rank, and levels of civilization.

More generally, the experience taught me that science education is not merely the transfer of correct ideas from one generation to the next. It is also an apprenticeship in judgment. It is where students learn how to handle brilliance contaminated by prejudice, how to read carefully without worship, and how to separate historical importance from moral innocence. We do not protect science by pretending its heroes were unassailable. We protect it by showing that truth-seeking is a long walk on rugged terrain.

In casual conversations, groups of friends and I have conducted the thought experiment of what Darwin would be like if he were around today. Usually, our results highlight some of his peculiarities: He was a naturalist but only received formal education in medicine (studies that he did not complete) and theology. He was wealthy and well connected, but was not an exceptional student, and famously lacked the mathematical proclivities of some of his contemporaries. But there is a more provocative dimension to the thought-exercise. I contend that 2026’s Darwin would carry the same savant-like obsession with nature but would apply it to the most complex social questions of today. His interests would surely transcend natural history and theories of evolution. He would care about the misinformation crisis, climate science, and have opinions about how to live in a world being upended by artificial intelligence and threats to democracy. He did not care much for disciplinary boundaries in his own time, and he would not care for them now. His computer desktop would have dozens of folders, some with machine-learning papers, others full of ornithology monographs. And he’d read them all.

What I’m offering may sound obvious to some, but the multiplicity that I discovered in Darwin is far from what I see encouraged in the scientists of today. In my view, scientific progress can be summarized as a race for prizes and the ability to gamify metrics, just as much as any other ambition. We build reputations in idea spaces often dominated by factions. We are penalized for pivoting into new areas of inquiry. We flood the market with ideas, hoping one of them catches fire. Surely, a love for the natural world lives in there somewhere, but the incentive structure has turned many of us into human large-language models, where we sometimes do not care about what we are measuring or why, as long as it lands in the pages of a prestigious journal. In this respect at least, I can’t help but lament how far we have fallen from the days of Darwin, even if science has improved in many aspects (for example, it is far more diverse and inclusive than in his day). For Darwin at least, science was a conversation between humans and nature, more than it was an industry. I wish that were true today, for all of us.

Because I offer critiques of the science status quo, I am often characterized as something of a cowboy (charitably) or a gadfly (a label I resent). But what I aspire to be, more than anything, is an intellectual child of Charles Darwin. By this I do not mean a disciple of every belief he held, nor a romantic devotee of his era. I mean someone who believes that science is bigger than its rituals, that it should be hospitable to unusual minds, and that difficult truths must be faced directly.

This article was originally published on Undark. Read the original article.
