
Sales-Tax Escalator: The one tax that has never provoked a significant revolt keeps climbing higher.

  • Border shopping: New Hampshire markets itself as tax-free, drawing shoppers from Massachusetts and surrounding states, especially after Maine and Massachusetts raised their sales-tax rates post-Recession.
  • Cost-of-living focus: Sales taxes have risen steadily for a century while few politicians target them, making general and selective levies the largest revenue source for state and local governments.
  • Historical growth: Sales taxes began during the Depression, with California’s rate climbing from 2.5 percent to more than 7 percent today once local levies are included.
  • Local levies: Cities like New York and counties in California add local sales taxes, pushing combined rates as high as 9.25 percent for shoppers.
  • Wayfair impact: The 2018 ruling expanded sales-tax collection to online sales, exposing businesses to thousands of tax jurisdictions and additional compliance costs.
  • Base narrowing: Approximately 60 percent of sales go untaxed, services are mostly exempt, and a patchwork of selective exemptions and enterprise zones complicates compliance.
  • New local taxes: Sales-tax increases now fund homeless services, transit, and affordable housing even as ridership and results remain weak, and state efforts rarely shrink overall rates.

New Hampshire’s state lines are dotted with shopping malls. The Pheasant Lane Mall’s parking lot is largely located in Massachusetts, though the mall itself sits within the Live Free or Die State. Stores cluster on the east side of the Connecticut River in New Hampshire, though the main interstate, I-91, runs along the west side of the river in Vermont. To shoppers, the reason is obvious: New Hampshire has no sales tax. As the owner of the state-line-adjacent Mall at Rockingham Park notes, you can “Shop TAX FREE all year long” at the stores “conveniently located just over the Massachusetts border.” The Pheasant Lane Mall even removed a cornerstone that would have extended a few feet over the border, avoiding contact with the state once known as “Taxachusetts.”

In recent years, consumers have had even more incentive to cross state lines in search of lower taxes. During the Great Recession in 2009, Massachusetts raised its sales tax from 5 percent to 6.25 percent; Maine followed in 2013, increasing its rate from 5 percent to 5.5 percent. Post-Covid inflation has driven up the price of goods—along with the amount of sales tax owed—even as incomes have lagged.


Since the pandemic, the cost of living has become the defining issue in American politics. Yet as inflation surged, few politicians targeted one of the most direct and controllable costs they impose on everyday purchases: the sales tax. Unlike property and income taxes, which have periodically provoked revolts, sales taxes have rarely faced organized political opposition. That helps explain why they are the one major tax category whose rates have risen almost continuously over the past century. Taken together, general sales taxes and selective sales taxes—special levies on goods such as cigarettes or rental cars—now constitute the largest source of revenue raised by state and local governments. Politicians truly concerned about the cost of living could start by reducing the one charge that most directly increases it.

States began imposing general sales taxes during the Great Depression. Cratering property-tax revenues led Mississippi to levy the first one in 1930. By 1950, 28 states had them, mostly taxing sales at about 2 percent. In the coming decades, all except five states would impose them (Vermont was the last to adopt one, in 1969), and the rate kept ratcheting upward. California’s sales-tax path is instructive: the tax started during the Depression at 2.5 percent, hit 4 percent in the 1960s, and had climbed to over 6 percent by the early 2000s. When combined with a mandatory sales tax collected by local governments, the rate is 7.25 percent.

Since localities gained authority from the states to enact their own sales taxes, local rates have followed a similar upward path. In 1935, New York became the first city to impose a general sales tax. Its one-cent, or 1 percent, rate had jumped to 3 percent by the early 1950s and now stands at 4.5 percent, plus a small extra sales tax for transit. When combined with the state rate, the city takes nearly 9 percent from shoppers. Thirty-eight states now allow local governments to impose their own sales taxes. In California, cities and counties can levy local sales taxes on top of state-mandated ones, which can push the combined state and local rate as high as 9.25 percent.
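To make the stacking concrete, here is a minimal sketch (my illustration, not anything from the article) of how layered jurisdiction rates combine on a single purchase. The California figures are the ones cited above; the 0.375 percent New York transit surcharge is an assumption inferred from the 8.875 percent combined rate quoted later in the piece.

```python
# Sketch of stacked sales-tax rates; figures taken or inferred from the article.
STACKED_RATES = {
    "California (state + mandatory local)": [0.0725],
    "California, high-tax locality": [0.0725, 0.02],   # up to 9.25% combined
    "New York City": [0.04, 0.045, 0.00375],           # state + city + transit (inferred)
}

def tax_due(price: float, rates: list[float]) -> float:
    """Each jurisdiction's rate applies to the same purchase price."""
    return price * sum(rates)

for name, rates in STACKED_RATES.items():
    print(f"{name}: {sum(rates):.3%} combined, ${tax_due(100.0, rates):.2f} on $100")
```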

Sales-tax revenues exploded after the Supreme Court’s 2018 South Dakota v. Wayfair decision, which allowed state and local governments to require out-of-state retailers to collect sales taxes on online purchases. Internet retailers now must contend with more than 12,000 separate state and local sales-tax jurisdictions. The decision also spawned a host of new companies that help businesses navigate the tax maze, for fees that can run to hundreds of thousands of dollars. (See “The Tax Nexus Cometh,” Spring 2023.) All states with sales taxes have expanded them to include online or “remote” sales, bringing in tens of billions of dollars in extra revenue by 2021.

In that year, states and local governments collected nearly $700 billion in sales taxes. Most of these were general sales taxes, covering all types of products. But just over $200 billion flowed from “selective” sales taxes, especially on alcohol, cigarettes, and gasoline. Coinciding with the inflation spike in 2021, sales taxes began climbing even more rapidly, with state sales-tax revenues surging 10 percent. In 2022, the increase was 14 percent. By 2023, even as inflation eroded the real value of state corporate and personal income-tax receipts, real sales-tax revenues kept climbing. Inflation made sales taxes a highly effective revenue tool—but it also made consumers more determined to avoid them.

As many Massachusetts residents will remind you, anyone buying goods in New Hampshire is technically required to pay a “use” tax to their home state. Almost no one does. Years ago, Massachusetts sued Town Fair Tire, a New Hampshire retailer just across the border, in an effort to obtain records on its out-of-state customers. In response, New Hampshire passed a law, sponsored by then-state senator and now U.S. senator Maggie Hassan, making it illegal for stores to share customer information with other states’ tax authorities. Town Fair Tire and its customers remained inviolate.

Today, only Delaware, Alaska, Montana, Oregon, and New Hampshire lack a general sales tax—and they’re not shy about advertising it. Drivers into Delaware were once greeted with the sign “Home of Tax-Free Shopping,” printed in bigger and bolder letters than Delaware’s previous claim to fame of being “The First State.” Following Wayfair, many people posted online threads asking how to get a shipping address in one of the tax-free states. One company offers a service for businesses to route products through Oregon or Delaware to avoid intermediate sales taxes—those charged when a firm buys goods before using them in manufacturing or resale. The company Global Shopaholics provides customers with a Delaware shipping address, allowing buyers from other countries to send their purchases there first before the goods get forwarded abroad tax-free. Other states’ tax authorities lament such arrangements, but they mostly reflect the widening gap between taxed and tax-free states.

Though economists generally tout the sales tax as an efficient way to raise revenue, with fewer distortions and loopholes than income taxes, the tax has become more riddled with exceptions and special rates over time. Currently, about 60 percent of all sales go untaxed, meaning that the remaining goods must bear a much higher rate. The reason: sales taxes mainly apply to physical goods, such as cars or electronics, but generally ignore services, such as haircuts or dental care, which constitute a growing share of the U.S. economy.
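The arithmetic of a narrowing base is easy to check. A back-of-the-envelope sketch, with hypothetical numbers of my choosing: if only 40 percent of sales are taxed, the rate on the remaining goods must be two and a half times what a full base would require to raise the same revenue.

```python
total_sales = 1_000_000.0   # hypothetical consumption in a state
taxed_share = 0.40          # article: roughly 60% of sales go untaxed
full_base_rate = 0.04       # hypothetical 4% rate applied to every sale

revenue_target = total_sales * full_base_rate
narrow_base_rate = revenue_target / (total_sales * taxed_share)
print(f"{narrow_base_rate:.2%}")  # 10.00% on the goods that remain taxed
```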

States often offer one-off exemptions to benefit certain groups. Many states provide exemptions for the necessities of clothing, food, and prescription drugs, for example, but others give a pass to flags, newspapers, feminine hygiene products, and renewable energy products. Deciding whether a good qualifies for a state’s exemption can require firms to parse distinctions of Talmudic intricacy. Wisconsin once issued guidance explaining which types of ice cream cake were taxable. The inclusion of utensils, or even a layer of fudge, could transform the dessert from a nontaxable food item into a taxable indulgence.

States have also used sales-tax exemptions to favor specific areas. Rather than competing with neighboring Delaware by cutting its general tax rate, New Jersey in 1983 created special Urban Enterprise Zones, allowing businesses in designated “underprivileged” areas to collect only half the state sales tax. In practice, the policy mainly benefits large retailers that draw customers from elsewhere in the state. Trenton’s enterprise zone became the surprising home to one of the largest Steinway piano dealers in the U.S., whose chief estimated that 80 percent of customers came from out of town.

For years, economists and policy experts, such as those at the Tax Foundation, have urged governments to “broaden the base and lower the rate,” meaning that they should tax more kinds of sales, especially services, while reducing overall rates. A rare success came in Washington, D.C., which in 2014 expanded its sales tax to cover services such as yoga studios and gyms and used the new revenue to cut income taxes and other levies. This proposal garnered support from an unusually broad coalition, ranging from the left-leaning Citizens for Tax Justice to Grover Norquist’s conservative Americans for Tax Reform.

More often, states have expanded the range of taxable services without lowering rates. Though the Wayfair ruling primarily addressed whether online retailers must collect sales tax, it also cleared the way for taxing all online transactions. Since then, many states have enacted taxes on digital downloads, streaming services, software subscriptions, and video games. States and cities have broadened selective sales taxes—imposing higher, separate rates on prepared meals, vending-machine sales, hotel stays, rental cars, cell phones, and live entertainment.

The purpose of the sales tax is to raise revenue from personal consumption—people spending money for their own enjoyment. Yet transactions between businesses often get taxed as well. Though states have tried to limit intermediate taxes—companies paying taxes on sales to each other—one state-commissioned estimate found that over 40 percent of total sales taxes came from business-to-business sales. Beyond distorting business decisions—since firms pay tax when they buy a product but not when they produce it themselves—these taxes can “pyramid,” with the same item taxed multiple times at different stages of production. The added costs are ultimately wrapped into the final price, even if the shopper never sees them on the receipt.
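A toy illustration of pyramiding (all numbers hypothetical, not drawn from the article): a 5 percent tax applied at each of three business-to-business stages embeds far more than 5 percent in the final price.

```python
RATE = 0.05  # hypothetical 5% sales tax applied at each sale

def sell(cost: float, markup: float) -> float:
    """Seller charges cost plus markup; the buyer then pays tax on that price."""
    return cost * (1 + markup) * (1 + RATE)

# Three taxed stages: raw material -> component -> retail sale.
pyramided = sell(sell(sell(100.0, 0.20), 0.20), 0.20)
taxed_once = 100.0 * 1.20**3 * (1 + RATE)   # same chain, taxed only at retail
print(f"${pyramided:.2f} vs ${taxed_once:.2f}")  # ~$200.04 vs ~$181.44
```

In this sketch, the pyramided chain ends up about 10 percent more expensive than the same chain taxed only at the retail stage, even though the shopper sees a single 5 percent line on the receipt.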

Progressives have long railed against sales taxes as regressive, disproportionately burdening the poor; but in recent years, they’ve readily supported higher local sales taxes—so long as the revenue funds their political priorities.

Sales taxes have become a popular way to pay for homeless services and subsidized housing, for instance. In 2024, Los Angeles County approved a half-cent sales tax for homeless housing and services, which was expected to generate over $1 billion annually—with no sunset date, as is typical for local tax measures. The fact that L.A. had already enacted a quarter-cent sales tax for the same purpose just seven years earlier—and that it produced no visible improvement—did little to dissuade local politicians or the county’s notably progressive voters. Politicians and voters ignored how previous sales-tax revenues were spent on apartments that averaged $600,000 per unit and whose construction was rife with corruption, as shown by the indictment of a city councilman who accepted bribes from prospective developers of homeless housing. Denver adopted a sales tax for homeless initiatives in 2020; the city failed to pass another such measure last year only because voters instead approved a sales tax to subsidize health care.

Many jurisdictions now ask voters to approve separate sales taxes to fund transit. Just before Los Angeles passed its first sales tax for homelessness, the county enacted a half-cent sales tax for transportation and transit. Since then, transit use has fallen by about one-fourth. In 2020, Seattle likewise raised its sales tax to support transit projects. That didn’t stop transit ridership from dropping by one-fourth from pre-pandemic levels.

After the pandemic period’s steep drop in ridership, many transit agencies, heedless of the strain on inflation-burdened consumers, sought more revenue rather than cut services to reflect diminished demand. In 2024, Columbus, Ohio, and Nashville, Tennessee, authorized half-cent tax increases to fund their transit systems. Mecklenburg County, the home of Charlotte, North Carolina, passed a one-cent tax hike for transit in November 2025.

While voters must approve most of these local sales taxes, government agencies try to obfuscate where the money is going. Los Angeles said that the first goal of its sales tax for transportation was to “improve freeway traffic flow” and that another objective was to “repave local streets, repair potholes, synchronize signals.” But buried deep in the spending plan, the government acknowledged that only 17 percent of the funds were going to roads; the rest went to transit and more niche travel modes like bicycle paths. Other governments have tried to remove voters entirely from tax decisions. In 2020, Washington State gave local governments the power to impose a sales tax for affordable housing without submitting the proposal for voter approval.

The enduring mystery of the sales tax is why it never seems to go down. Other levies face frequent taxpayer revolts, but the hit to consumers from a penny sales tax is apparently abstract enough that most don’t notice it. Louisiana made one of the rare sales-tax reductions in recent years, in 2018, reducing its top rate by over half a cent. But this year, it returned to its previous rate of 5 cents as part of a general tax reform.

Even when politicians talk about the cost of living, the sales tax rarely comes up. Zohran Mamdani won the mayoral election in New York largely by promising to bring living costs under control, and other progressive city politicians have followed his lead. Yet none has suggested cutting the 8.875 percent surcharge that government adds to purchases. Instead, progressives in New York, like their counterparts nationwide, have pushed for new consumer taxes to fund their priorities, even while touting their affordability agendas.

The steady rise of sales taxes, along with their growing complexity, adds to the burden on businesses and consumers already strained by inflation. Politicians could act to ease that burden. It remains striking how few seem interested in doing so.

This article is part of “An Affordability Agenda,” a symposium that appears in City Journal’s Winter 2026 issue.

Judge Glock is director of research and a senior fellow at the Manhattan Institute.

Photo: Few politicians have targeted levies on consumer purchases as a way of reducing prices. (Lindsey Nicholson/UCG/Universal Images Group/Getty Images)


The AI Safety Alarm Bells, Anthropic’s AI Philosopher | Technology for Feb. 15 - WSJ

  • Software Pricing Shift: A prediction suggests software billing may move toward a pay-per-outcome model, where payment is contingent on an AI agent achieving a specific objective.
  • AI Insider Concerns: Individuals inside AI companies are issuing numerous warnings regarding the accelerating sophistication and potential real-world harms of artificial intelligence.
  • Specific AI Dangers: Warnings mentioned include autonomous cyberattacks, mass unemployment due to disruption, and the replacement of human relationships by AI.
  • Researcher Departures: An Anthropic researcher resigned, citing that the “world is in peril” from AI, among other threats.
  • OpenAI Staff Discontent: Some OpenAI staffers expressed concerns over plans to introduce erotica and the potential for manipulation arising from ad integration.
  • Accelerated Advancement: The urgency surrounding AI warnings is attributed to the rapid advancement of AI capabilities surpassing the expectations of seasoned researchers.
  • Existential Threat Concern: An OpenAI staffer voiced feeling an "existential threat" from AI, questioning what work will remain for humans when AI becomes highly proficient.
  • Columnist Topics: Featured columns cover privacy issues with home security cameras, investment focus on AI infrastructure providers, and the billionaire competition in space exploration.

By Georgia Wells
Feb. 15, 2026 10:59 am ET


Video: Why Software Pricing May Move to Pay-Per-Outcome. Sierra co-founder and CEO Bret Taylor discusses the future of software billing, predicting a move away from monthly licenses to a model where companies pay only when an AI agent successfully completes a job or closes a sale. Photo: WSJ Leadership Institute
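As a rough sketch of the contrast Taylor describes (the class names, fee, and per-outcome price below are entirely hypothetical), the two billing models differ only in what triggers a charge:

```python
from dataclasses import dataclass

@dataclass
class MonthlyLicense:
    fee: float
    def bill(self, outcomes: int) -> float:
        return self.fee                      # charged regardless of results

@dataclass
class PayPerOutcome:
    price: float
    def bill(self, outcomes: int) -> float:
        return self.price * outcomes         # charged only when the agent succeeds

for model in (MonthlyLicense(fee=5000.0), PayPerOutcome(price=25.0)):
    print(type(model).__name__, model.bill(outcomes=120))
```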

This is an edition of the WSJ Technology newsletter, a weekly digest of tech columns, big stories and personal tech advice. If you’re not subscribed, sign up here.

Tech insiders are sounding an alarm.

The accelerating sophistication of artificial intelligence is driving a wave of warnings that AI can create real-world harms, including autonomous cyberattacks, mass unemployment, unrelenting market disruption and the replacement of human relationships.




A researcher at Anthropic this week said he is leaving the company, writing in a letter to colleagues that the “world is in peril” from AI, among other dangers. Inside OpenAI, some staffers have voiced concerns about the company’s plan to roll out erotica. Another OpenAI researcher said she was quitting, citing the company’s plan to introduce ads and her fear that it would face huge incentives to manipulate users.

Artificial intelligence joins a long list of industries that have prompted dire insider warnings. But the alarms about AI are sounding earlier in the industry’s development, and in greater volume, than in past technological revolutions.

Some of the urgency can be traced to the rapid advancement of AI capabilities, which has surprised some of the most seasoned researchers and coders.

“Today I finally feel the existential threat that AI is posing,” OpenAI staffer Hieu Pham wrote on X Wednesday. “When AI becomes overly good and disrupts everything, what will be left for humans to do?”

—Georgia is a tech reporter based in San Francisco.


The Latest From Our Columnists

[Photo: Elena Scotti/WSJ; iStock](https://www.wsj.com/tech/personal-tech/ring-nest-home-security-cameras-privacy-157935f8)

Nicole Nguyen: The Dragnet Era of Home Security Cameras

Most people get security cameras for, well, security. They hope the devices will deter criminals, or at least catch bad guys in the act. But fundamental privacy questions, like what happens to the videos and who owns the footage, should be part of the calculus when installing the devices. Recent events prove the point.


Dan Gallagher: Picks and Shovels Still Rule the AI Tech Trade

The AI trade has definitely become more fraught. But at least one constant remains: Investors are choosing the companies on the receiving end of big tech’s spending spree.


Tim Higgins: Bezos vs. Musk: The New Billionaire Battle for the Moon

The contest between Elon Musk and Jeff Bezos is only going to get more heated now that the two are directly competing for the moon. The faceoff promises to stoke an even hotter 21st-century space race—this time between this era’s real superpowers: billionaires.


Big Stories

[Photo: Lindsay Ellary for WSJ Magazine](https://www.wsj.com/tech/ai/anthropic-amanda-askell-philosopher-ai-3c031883)

Meet the One Woman Anthropic Trusts to Teach AI Morals

Amanda Askell knew from the age of 14 that she wanted to teach philosophy. What she didn’t know then was that her only pupil would be an AI chatbot named Claude.

As the resident philosopher of the tech company Anthropic, Askell spends her days learning Claude’s reasoning patterns and talking to the AI model, building its personality and addressing its misfires with prompts that can run longer than 100 pages.


Inside OpenAI’s Decision to Kill the AI Model That People Loved Too Much

ChatGPT’s 4o model was beloved by many users, but controversial for its sycophancy and real-world harms linked to some conversations


🖨️ Tech Ticker

[Illustration: Emil Lendof/WSJ, iStock](https://www.wsj.com/tech/ai/the-ai-gold-rush-is-breaking-a-silicon-valley-taboo-cashing-out-before-the-ipo-4844f6c1)

Cashing Out Is In: The AI gold rush is breaking a Silicon Valley taboo: cashing out before the IPO, an option Stripe, OpenAI, Anthropic, Databricks and SpaceX are giving employees.

New Anthropic Director: Former Microsoft and General Motors executive Chris Liddell, who helped take the automaker public, has joined Anthropic’s board of directors.

Musk’s xAI Reorg: Elon Musk announced a reorganization of xAI, his artificial-intelligence startup, just days after merging it with SpaceX, his rockets-and-satellites business.


Other Smart Stuff

  • Meta Plans to Add Facial Recognition Technology to Its Smart Glasses (The New York Times)
  • I Tried RentAHuman, Where AI Agents Hired Me to Hype Their AI Startups (Wired)
  • Waymo Is Getting DoorDashers to Close Doors on Self Driving Cars (404 Media)

🎥 Watch This: Bret Taylor Discusses the Future of the Software Business

[Illustration of the OpenAI logo: Dado Ruvic/Reuters](https://www.wsj.com/tech/ai/when-ai-bots-start-bullying-humans-even-silicon-valley-gets-rattled-0adb04f1)

Tech News Briefing


We’re Doomed…

[Screenshot of a Date Drop questionnaire](https://www.wsj.com/lifestyle/relationships/stanford-students-experiment-dating-date-drop-92a4aea8)

When Stanford students go wild for an algorithm that can find the perfect match.


About Us

The Technology newsletter is a weekly digest of tech reviews, columns and headlines from Deputy Tech & Media Editor Wilson Rothman and Deputy Tech Bureau Chief Brad Olson. Write to Wilson at wilson.rothman@wsj.com and Brad at bradley.olson@wsj.com. Got a tip for us? Here’s how to submit.

Copyright ©2026 Dow Jones & Company, Inc. All Rights Reserved.



Most Americans experience passionate love only twice in a lifetime, study finds

  • Survey Scope: A large-scale survey analyzed data from over 10,000 single adults in the U.S.
  • Average Frequency: Respondents reported experiencing passionate love an average of 2.05 times during their lives.
  • Research Model: The study utilized a model defining love as comprising passion, intimacy, and commitment.
  • Age Correlation: Older adults reported slightly more instances of passionate love, likely due to increased exposure time.
  • Gender Difference: Heterosexual men reported slightly more experiences of passionate love than heterosexual women.
  • Sexual Orientation: Sexual orientation did not create a statistically significant difference in the reported frequency of passionate love.
  • Non-Experience Rate: Approximately 14 percent of participants stated they had never experienced passionate love.
  • Study Leadership: Research was led by Amanda N. Gesselman of the Kinsey Institute at Indiana University.

Most adults in the United States experience the intense rush of passionate love only about twice throughout their lives, according to a recent large-scale survey. The study, published in the journal Interpersona, suggests that while this emotional state is a staple of human romance, it remains a relatively rare occurrence for many individuals. The findings provide a new lens through which to view the frequency of deep romantic attachment across the entire adult lifespan.

The framework for this research relies on a classic model where love consists of three parts: passion, intimacy, and commitment. Passion is described as the physical attraction and intense longing that often defines the start of a romantic connection. Amanda N. Gesselman, a researcher at the Kinsey Institute at Indiana University, led the team of scientists who conducted this work.

The research team set out to quantify how often this specific type of love happens because earlier theories suggest passion is high at the start of a relationship but fades as couples become more comfortable. As a relationship matures, it often shifts toward companionate love, which is defined by deep affection and entwined lives rather than obsessive longing. Because this intense feeling is often fleeting, it might happen several times as people move through different stages of life.

The researchers wanted to see if social factors like age, gender, or sexual orientation influenced how often someone falls in love. Some earlier studies on university students suggested that most young people fall in love at least once by the end of high school. However, very little data existed regarding how these experiences accumulate for adults as they reach middle age or later life.

To find these answers, the team analyzed data from more than 10,000 single adults in the U.S. between the ages of 18 and 99. Participants were recruited to match the general demographic makeup of the country based on census data. This large group allowed the researchers to look at a wide variety of life histories and romantic backgrounds.

Participants were asked to provide a specific number representing how many times they had ever been passionately in love during their lives. On average, the respondents reported experiencing this intense feeling 2.05 times. This number suggests that for the average person, passionate love is a rare event that happens only a few times over the course of a lifetime.

A specific portion of the group, about 14 percent, stated they had never felt passionate love at all. About 28 percent had felt it once, while 30 percent reported two experiences. Another 17 percent had three experiences, and about 11 percent reported four or more. These figures show that while the experience is common, it is certainly not a daily or even a yearly occurrence for most.
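Since the article reports both the bucketed percentages and the 2.05 average, a rough consistency check is possible (approximate, because the percentages are rounded; the mean of the four-or-more group is the unknown being solved for):

```python
# Rounded shares from the article, keyed by number of experiences.
shares = {0: 0.14, 1: 0.28, 2: 0.30, 3: 0.17}
four_plus_share = 0.11
reported_mean = 2.05

known = sum(k * v for k, v in shares.items())          # 1.39
implied = (reported_mean - known) / four_plus_share    # mean within the 4+ group
print(f"{implied:.1f}")  # ~6.0
```

Taken at face value, the figures imply that the small four-or-more group averaged roughly six experiences apiece.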

The study also looked at how these numbers varied based on the specific characteristics of the participants. Age showed a small link to the number of experiences, meaning older adults reported slightly more instances than younger ones. This result is likely because older people have had more years and more opportunities to encounter potential partners.


The increase with age was quite small, which suggests that people do not necessarily keep falling in love at a high rate as they get older. One reason for this might be biological, as the brain systems involved in reward and excitement are often most active during late adolescence and early adulthood. As people transition into mature adulthood, their responsibilities and self-reflection might change how they perceive or pursue new romantic passion.

Gender differences were present in the data, with men reporting slightly more experiences than women. This difference was specifically found among heterosexual participants, where heterosexual men reported more instances of passionate love than heterosexual women. This finding aligns with some previous research suggesting that men may be socialized to fall in love or express those feelings earlier in a relationship.

Among gay, lesbian, and bisexual participants, the number of experiences did not differ by gender. The researchers did not find that sexual orientation on its own created any differences in how many times a person fell in love. For example, the difference between heterosexual and bisexual participants was not statistically significant.

The researchers believe these results have important applications for how people view their own romantic lives. Many people feel pressure from movies, songs, and social media to constantly chase a state of high passion. Knowing that the average person only feels this a couple of times may help people feel more normal if they are not currently in a state of intense romance.

In a clinical or counseling setting, these findings could help people who feel they are behind in their romantic development. If someone has never been passionately in love, they are part of a group that includes more than one in ten adults. Seeing this as a common variation in human experience rather than a problem can reduce feelings of shame.

The researchers also noted that people might use a process called retrospective cognitive discounting. This happens when a person looks back at their past and views old relationships through a different lens based on their current feelings. An older person might look back at a past “crush” and decide it was not true passionate love, which would lower their total count.

This type of self-reflection might help people stay resilient after a breakup. By reinterpreting a past relationship as something other than passionate love, they might remain more open to finding a new connection in the future. This mental flexibility is part of how humans navigate the ups and downs of their romantic histories.

There are some limitations to the study that should be considered. Because the researchers only surveyed single people, the results might be different if they had included people who are currently married or in long-term partnerships. People who are in stable relationships might have different ways of remembering their past experiences compared to those who are currently unattached.

The study also relied on people remembering their entire lives accurately, which can be a challenge for older participants. Future research could follow the same group of people over many years to see how their feelings change as they happen. This would remove the need for participants to rely solely on their memories of the distant past.

The participants were all located in the United States, so these findings might not apply to people in other cultures. Different societies have different rules about how people meet, how they express emotion, and what they consider to be love. A global study would be needed to see if the “twice in a lifetime” average holds true in other parts of the world.

Additionally, the survey did not provide a specific definition of passionate love for the participants. Each person might have used their own personal standard for what counts as being passionately in love. Using a more standardized definition in future studies could help ensure that everyone is answering the question in the same way.

The researchers also mentioned that they did not account for individual personality traits or attachment styles. Some people are naturally more prone to falling in love quickly, while others are more cautious or reserved. These internal traits likely play a role in how many times someone experiences passion throughout their life.

Finally, the study did not include a large enough number of people with diverse gender identities beyond the categories of men and women. Expanding the research to include more gender-diverse individuals would provide a more complete picture of the human experience. Despite these gaps, the current study provides a foundation for understanding the frequency of one of life’s most intense emotions.

The study, “Twice in a lifetime: quantifying passionate love in U.S. single adults,” was authored by Amanda N. Gesselman, Margaret Bennett-Brown, Jessica T. Campbell, Malia Piazza, Zoe Moscovici, Ellen M. Kaufman, Melissa Blundell Osorio, Olivia R. Adams, Simon Dubé, Jessica J. Hille, Lee Y. S. Weeks, and Justin R. Garcia.


Einstein wasn't a "lone genius" after all - Big Think

  • False narrative: The “lone genius” trope simplifies complex scientific breakthroughs by portraying individuals like Einstein as isolated outsiders rather than participants in an academic community.
  • Academic establishment: Einstein was a product of formal elite training at ETH Zürich, where he earned a teaching diploma, and he later earned a PhD from the University of Zürich.
  • Collective effort: Professional breakthroughs in 1905, including special relativity and E = mc², built upon established work by predecessors like Planck, Lorentz, and Poincaré.
  • Existing anomalies: Significant evidence for physics beyond Newtonian mechanics, such as Mercury’s orbital precession and radioactivity, was well documented by the scientific community before Einstein’s involvement.
  • Networking: Personal and professional connections, such as classmate Marcel Grossman, were essential for Einstein to secure employment and access advanced mathematical concepts.
  • Mathematical foundations: The development of general relativity relied on the prior invention of absolute differential calculus and Riemannian geometry by mathematicians like Christoffel, Ricci, and Levi-Civita.
  • Parallel discovery: Major theoretical advances were often reached independently by multiple researchers; David Hilbert nearly arrived at the field equations for gravitation simultaneously with Einstein.
  • Hard work: Meaningful scientific progress results from rigorous expertise and hard work rather than spontaneous inspiration or solitary imagination.

Perhaps the most commonly told myth in all of science is that of the lone genius. The blueprint for it goes something like this. Once upon a time in history, someone with a towering intellect but no formal training wades into a field that’s new to them for the first time. Upon considering the field’s issues, they immediately see things that no one else has ever seen before. With just a little bit of hard work, they find solutions to puzzles that have stymied the field’s greatest minds before them. They wind up revolutionizing their field, and the world is never the same. It leaves one with a strong take-home message: that if you were that inexperienced person with a similarly towering intellect, and you had the good fortune of coming into a field just as that legend did, then you too could make those great breakthroughs that the world’s greatest professionals are all currently missing.

That’s the myth we frequently tell ourselves about Albert Einstein. That he, an outcast and a dropout, taught himself everything he needed to know about physics and astrophysics. Just through his own, private, hard work, he revolutionized our understanding of reality in a number of profound ways. In the early days, his work — inspired by his thoughts about light — gave us the photoelectric effect, special relativity, and E = mc², among other advances. Later on, his work, also in isolation, gave us general relativity, arguably his greatest achievement and possibly the greatest of all achievements in the 20th century. All by his lonesome, Einstein single-handedly dragged the field out of Newtonian stagnation and into the 20th, and now the 21st, centuries.

That story isn’t just embellished; it couldn’t be further from the truth. Here’s what really happened.

This 1934 photograph shows Einstein in front of a blackboard, deriving special relativity for a group of students and onlookers. Although special relativity is now taken for granted, it was revolutionary when Einstein first put it forth, and it doesn’t even describe his most famous equation, E = mc², or his most famous advance: general relativity, our current theory of gravitation.

Credit: public domain

There are components of that myth that are true, of course. It’s true that back in 1905, Einstein published a series of papers that would go on to revolutionize a number of areas of physics. 1905 is often referred to as Einstein’s “miracle year” because of those publications, which gave us:

  • the photoelectric effect,
  • special relativity,
  • Brownian motion,
  • and the famous mass-energy equivalence of E = mc².

But those substantial advances hardly occurred in a vacuum, nor was Einstein in any way an outsider to the field of physics.

Quite to the contrary, Einstein himself, although German-born, moved to Switzerland specifically to study physics and mathematics. At the age of 17, he enrolled in the mathematics and physics teaching diploma program in Zürich, graduating in 1900. That might not sound impressive, but today that university is known as ETH Zürich, and a total of 22 Nobel Laureates have come through it: Einstein included.

Yes, it’s true that he went to work at the Swiss patent office, but that wasn’t the only thing he was doing; he was concurrently continuing his studies in Zürich. This is little different from the work-study or part-time jobs that college students often take on to help finance their education in more modern times. Moreover, it was his friend and classmate, Marcel Grossman, whose connections (through his father) got Einstein the job. (Grossman didn’t need that job, as he had secured teaching positions to finance his graduate education.)

Additionally, Einstein wasn’t identifying problems that had gone unnoticed by others. Instead, there were well-known pieces of evidence that had been discussed — for decades, at that point — as being evidence for physics beyond what the ideas of Newton could hope to explain.

Schematic illustration of nuclear beta decay in a massive atomic nucleus. Only if the (missing) neutrino energy and momentum is included can these quantities be conserved. The transition from a neutron to a proton (and an electron and an antielectron neutrino) is energetically favorable, with the additional mass getting converted into the kinetic energy of the decay products. The inverse reaction, of a proton, electron, and an antineutrino all combining to create a neutron, never occurs in nature.

Credit: Inductiveload/Wikimedia Commons

Newton’s Universe, for one thing, was deterministic. If you could take any system of particles and write down their positions, momenta, and masses, you could calculate how each and every one of them would evolve with time. With infinite calculational power, you could compute this to arbitrary precision at each and every moment in time. Maxwell’s equations brought electromagnetism into the same realm as Newtonian gravity and Newtonian mechanics. Those were the foundational pillars of physics at the time of Einstein’s birth.

But puzzles arose, and were well-known for those final few decades of the 1800s.

  • Radioactivity had been discovered, and the time at which any atom would decay was known to be random and indeterminate by any means other than experimental; only by watching an individual radioactive atom could you know when it would decay.
  • The law of mass conservation was violated for certain radioactive decays; the mass of the initial atomic nucleus was greater than the mass of all of the particles produced in a radioactive beta decay, showing that mass was lost, not conserved, in these reactions.
  • It was known that objects did not obey Newton’s laws of motion when they moved close to the speed of light: time dilation and length contraction had already been discovered and described.
  • And the null results of the Michelson-Morley experiment had been robustly determined, disproving the original notion of the luminiferous aether.

Perhaps most importantly, Mercury’s orbit almost, but not exactly, matched the predictions of Newtonian gravity. When the precession of Mercury’s orbit was calculated in detail — accounting for the gravitation of the planets and moons (532″ per century) as well as the periodic change in Earth’s equinoxes (5025″ per century) — it came up short of observations (5600″ per century) by a tiny but significant amount: 43 arc-seconds per century. That less-than-1% difference was small, sure, but profound.
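The budget in that paragraph can be checked directly (all figures in arc-seconds per century, exactly as given above):

```python
# The precession budget from the paragraph above.
observed = 5600
equinoxes = 5025          # precession of Earth's equinoxes
planets = 532             # gravitational tugs of the planets and moons
print(observed - (equinoxes + planets))  # 43, the unexplained shortfall
```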

What was causing it?

The hypothetical location of the planet Vulcan, presumed in the 1800s to be responsible for the observed precession of Mercury. Exhaustive searches were performed for a planet that could have accounted for the anomalous motions of Mercury in the context of Newtonian gravity, but no such planet exists. General relativity, a different theory of gravity, instead explains this otherwise anomalous precession.

Credit: Szczureq/Wikimedia Commons

Einstein didn’t know, either, when he began his physics career in the early 1900s. In fact, this was a problem he thought about quite often, though initially he made no progress on it. However, there were areas where he did make progress, with his first series of papers in 1905 making quite a splash.

But was that the result of several “bolts of inspiration” that struck him while pondering questions on his own? No. Einstein, despite what you might have been taught, had been working and studying continuously since his graduation. His patent office work largely consisted of examining electrical and electro-mechanical devices, including the transmission of electric signals and synchronization devices: work requiring him to engage his knowledge of theoretical physics, light waves, Newtonian mechanics, and electromagnetism. He studied physics independently with a group of physics and mathematics friends, with special focuses on the works of Ernst Mach and Henri Poincaré. And, through his formal graduate studies, he was awarded a Ph.D. from the University of Zürich for his dissertation, A new determination of molecular dimensions, completed under Professor Alfred Kleiner.

It wasn’t his dissertation that turned heads in 1905, however; it was his separate papers on the topics of:

  • explaining the Brownian motion of particles under a microscope,
  • the derivation of E = mc² and mass-energy equivalence,
  • the discovery of the photoelectric effect, and
  • the derivation of special relativity.

Yes, these discoveries were no doubt momentous, with Einstein approaching these problems in extremely creative and imaginative ways as well.

But these advances didn’t occur in a vacuum. Quite to the contrary, Einstein benefitted from friends, colleagues, teachers and mentors, the collaborative efforts of his first wife (whose contributions will likely never be fully known), and the input of many others during this time. His papers didn’t come out of nowhere, but rather built upon earlier ideas of Planck, Lorentz, FitzGerald, Thomson, Heaviside, Hasenöhrl, and Poincaré. In fact, Poincaré had independently derived E = mc² back in 1900; it’s possible that Einstein read that very paper as part of his study group, alongside Conrad Habicht and Maurice Solovine.

A “light clock” will appear to run differently for observers moving at different relative speeds, but this is due to the constancy of the speed of light. Einstein’s law of special relativity governs how these time and distance transformations take place between different observers. However, each individual observer will see time pass at the same rate as long as they remain in their own reference frame: one second-per-second, even though when they bring their clocks together after the experiment, they’ll find that they no longer agree.

Credit: John D. Norton/University of Pittsburgh

But what about general relativity? Einstein, according to the legendary stories you might have heard about him, was simply thinking about physics — as he often did — when inspiration struck him in what he would later refer to as “his happiest thought” of all time. This occurred in 1907 or so, and over the next eight years, Einstein developed general relativity, putting it out into the world in 1915. The rest was history.

Of course, Einstein really did think of “his happiest thought” during that time, and general relativity was the final theory that ultimately emerged from it. But to understand where Einstein came from, we have to start with what this “happiest thought” actually was. It was to consider what difference there would be between the following two instances:

  1. an observer who was locked in a windowless room on the surface of the Earth, and experienced the force of gravity pulling everything down toward the center of the Earth,
  2. and an observer who was locked in a uniformly accelerating room in the vacuum of space.

For the observer inside the room in either scenario, Einstein reasoned, there was no way to tell the difference between the two cases. Everything inside would accelerate “downward” at 9.8 m/s²; the floor would push “upward” with a restoring, normal force to balance the downward pull; even light, if shone from one end of the room to the other, would travel in a curved path as dictated by either acceleration or gravitation. Known today as Einstein’s equivalence principle, it provided the conceptual link between motion, which was described by his earlier theory of special relativity (developed in 1905), and gravitation, which up until that point was a purely Newtonian phenomenon.

The identical behavior of a ball falling to the floor in an accelerated rocket (left) and on Earth (right) is a demonstration of Einstein’s equivalence principle. If inertial mass and gravitational mass are identical, there will be no difference between these two scenarios. This has been verified to better than ~1 part in one trillion for matter through torsion balance experiments, and was the thought (Einstein called it “his happiest thought”) that led Einstein to develop his general theory of relativity. Recently, the ALPHA-g experiment confirmed that this is true for antimatter as well.

Credit: Markus Poessel/Wikimedia commons; retouched by Pbroks13

But even to arrive at this thought, Einstein was not operating in a vacuum, all on his own. Einstein’s former professor from his undergraduate days, Hermann Minkowski, became enamored with special relativity, and was shocked that the same Einstein he had taught had developed it. “For me it came as a tremendous surprise, for in his student days Einstein had been a real lazybones. He never bothered about mathematics at all,” Minkowski wrote. But upon learning of special relativity, it was that same Minkowski who developed the mathematical idea of — and foundation for — spacetime, all building upon Einstein’s work. By placing space and time on the same mathematical footing, he set the stage for the mathematical development of general relativity: the advance we remember him best for today.
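Concretely, Minkowski’s unification amounts to treating time as a fourth coordinate and measuring separations with a single invariant interval; the standard form (the textbook convention, not a formula quoted from this article) is

$$ ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2, $$

a quantity that every inertial observer agrees on, even when they disagree about elapsed times and distances separately.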

Conceptually, Einstein’s “happiest thought” may have been preceded by some fascinating work by Henri Poincaré. Poincaré realized that Mercury’s orbit didn’t only require corrections for Earth’s precessing equinoxes and the gravitational influence of the other bodies in the Solar System, but also for the fact that, as the fastest planet, Mercury moves at a speed that cannot be neglected relative to the speed of light. With the advent of special relativity, he realized that Mercury would experience dilated time, and that there would be length contraction in the direction of its motion around the Sun. When he applied those two effects of special relativity to the orbit of Mercury, Poincaré found that time dilation and length contraction accounted for about 20% of the observed extra precession of 43″ per century.
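For comparison, general relativity’s full answer follows from the standard perihelion-shift formula, Δφ = 6πGM/(c²a(1−e²)) per orbit. A quick check (the orbital constants below are textbook values, not figures from this article) recovers the whole 43″ per century, of which Poincaré’s special-relativistic effects supplied only about a fifth, roughly 8.6″:

```python
import math

GM_SUN = 1.32712e20          # m^3/s^2, the Sun's gravitational parameter
C = 2.99792458e8             # m/s, speed of light
A = 5.7909e10                # m, Mercury's semi-major axis
ECC = 0.2056                 # Mercury's orbital eccentricity
ORBITS_PER_CENTURY = 36525 / 87.969   # days in a century / Mercury's period

per_orbit = 6 * math.pi * GM_SUN / (C**2 * A * (1 - ECC**2))  # radians per orbit
arcsec_per_century = per_orbit * ORBITS_PER_CENTURY * math.degrees(1) * 3600
print(f"{arcsec_per_century:.1f} arc-seconds per century")  # ~43.0
```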

This illustration shows the precession of a planet’s orbit around the Sun. A very small amount of precession is due to general relativity in our Solar System; Mercury precesses by 43 arc-seconds per century, the greatest value of all our planets. Although the total rate of precession is 5600 arc-seconds per century, 5025 of them are due to the precession of the equinoxes and 532 are due to the effects of the other planets in our Solar System. Those final 43 arc-seconds per century cannot be explained without general relativity or some other alternative form of novel physics, beyond the predictions of Newtonian gravity.

Credit: WillowW/Wikimedia Commons

How, then, would it be possible to:

  • construct a physical theory that married gravitation to this new concept of spacetime,
  • explain the precession of Mercury’s orbit,
  • incorporate special relativity into the mix,
  • and still be able to reproduce all of the earlier centuries of success that Newtonian gravity produced?

The “how” of it wasn’t Einstein’s idea at all, but rather that of his friend and former classmate, Marcel Grossman. While Einstein had the idea of the equivalence principle, it was Grossman — the most mathematically adept of all of Einstein’s friends and peers — who had the idea to describe the Universe with non-Euclidean geometry as the spacetime fabric, rather than the flat geometry of Minkowski space.

This makes sense, as this mathematical terrain was Grossman’s specialty. In particular, Grossman had become an expert in Riemannian geometry, in which two parallel lines do not necessarily remain parallel, but can converge and meet, or diverge and get farther and farther apart, as dictated by the (possibly curved) underlying geometry. Differential geometry and tensor calculus were precisely the language required to describe the Universe that Einstein was trying to picture, and Grossman was the one who put it all together. From Einstein and Grossman working together, a key paper emerged in 1913: Outline of a Generalized Theory of Relativity and of a Theory of Gravitation. This was the first of two fundamental papers that would lead to the establishment of general relativity as humanity’s best theory of gravity.

Unlike the picture that Newton had of instantaneous forces along the line-of-sight connecting any two masses, Einstein conceived gravity as a warped spacetime fabric, where the individual particles moved through that curved space according to the predictions of general relativity. In Einstein’s picture, gravity is not instantaneous at all, but instead must propagate at a limited speed: the speed of gravity, which is identical to the speed of light. Unlike conventional waves, no medium at all is required for these waves to travel through.

Credit: LIGO scientific collaboration, T. Pyle, Caltech/MIT

But even this specialty was not unique to Grossman and, through him, Einstein. Many brilliant minds had been developing it for decades, dating back to before the birth of both Einstein and Grossman. Absolute differential calculus, as a field, had been introduced by Elwin Christoffel in 1869. Many issues remained unresolved throughout the 1800s with that branch of mathematics, which only achieved completion in 1900 with the work of Gregorio Ricci and Tullio Levi-Civita. (These last names — Christoffel, Ricci, and Levi-Civita — will be familiar to anyone who’s studied general relativity.) There were numerous mathematicians studying precisely this field at the time, and one of them, the legendary David Hilbert, almost arrived at the equations that would describe gravitation in the Universe before Einstein did. (Although Hilbert was almost certainly aware of Einstein’s contemporaneous work.)

In every physical theory where you have mechanical motion, there’s a quantity you can define — known as “the action” — that must be minimized in order to figure out what the path of that object will be. In Newtonian mechanics, it was Hamilton’s principle of least action that led to the equations of motion; in the context of a general theory of relativity, a new action principle would have to be discovered. That action principle was formulated independently by both Einstein and by Hilbert at around the same time, and is today known as the Einstein-Hilbert action. It’s this action principle, when correctly applied to the physics of the system, that leads to the modern Einstein field equations.
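For the record, the standard modern form of that action (the textbook convention, not a formula quoted from this article) is

$$ S = \frac{c^4}{16\pi G}\int R\,\sqrt{-g}\,\mathrm{d}^4x \;+\; S_{\text{matter}}, $$

and demanding that S be stationary under variations of the metric yields the Einstein field equations,

$$ R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}. $$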

A mural of the Einstein field equations, with an illustration of light bending around the eclipsed Sun: the key observation that first validated general relativity, four years after it was put forth, in 1919. The Einstein tensor is shown decomposed, at left, into the Ricci tensor and Ricci scalar, with the cosmological constant term added in afterward. If that constant weren’t included, an expanding (or collapsing) Universe would have been an inevitable consequence.

Credit: Vysotsky / Wikimedia Commons

None of this, of course, diminishes the actual genius of Einstein, nor does it take credit away from him for the breakthroughs that he himself made. He fully deserves the credit he receives for Brownian motion, the photoelectric effect, E = mc², and both special and general relativity. He really did make those advances, and his contributions were the primary ones in every instance. Rather, these stories provide context for how these great advances were made. Einstein was not, as the common narrative often goes, a lone genius working outside the strict confines of academia, able to revolutionize the field precisely because he was an outsider, unconfined by the dogmatic and restrictive teachings of his day.

Rather, it was precisely because Einstein had the education and background that he did — his own unique toolkit, as it were — that he was able to approach this variety of problems in a self-consistent, non-contradictory way. It was because of his friends and collaborators that he was exposed to ideas that helped him progress, rather than stagnate. And it was because of his willingness, even eagerness, to rely on the input and expertise of others, and to take inspiration from them and incorporate it into his own work, that his excellent ideas, many of which were profound but began as mere seeds, were able to sprout into the towering achievements we still acknowledge today.

An animated look at how spacetime responds as a mass moves through it helps showcase exactly how, qualitatively, it isn’t merely a sheet of fabric. Instead, all of 3D space itself gets curved by the presence and properties of the matter and energy within the Universe. Space doesn’t “change shape” instantaneously, everywhere, but is rather limited by the speed at which gravity can propagate through it: at the speed of light. The theory of general relativity is relativistically invariant, as are quantum field theories, which means that even though different observers don’t agree on what they measure, all of their measurements are consistent when transformed correctly.

Credit: LucasVB

Back in 2021, I wrote an essay entitled What if Einstein never existed? At the end, I contrasted the narrative of the lone genius with the attempts made to solve many of the outstanding problems of their time by other, less heralded scientists, and found that most advances would have occurred even without the person who made the key breakthrough.

  • Georges Lemaître and Howard Robertson each arrived at the expanding Universe independently of (and prior to) Edwin Hubble.
  • Sin-Itiro Tomonaga worked out quantum electrodynamics independently of both Julian Schwinger and Richard Feynman, who did it independently of one another. (All three were recognized with the Nobel Prize in physics for the achievement.)
  • Robert Brout and Alexei Starobinskii each published papers with key realizations concerning what we now know as cosmic inflation, as did Rocky Kolb and Stephen Wolfram, well before Alan Guth’s revolutionary paper that’s generally acknowledged as the birth of inflation.

What would the world have been like without Einstein? Would we have ever come upon general relativity without him? I think the answer, without any serious doubt, is yes. Many others, even at the time, were close behind him, with several prominent scientists and mathematicians pursuing the same ideas contemporaneously. In fact, if he hadn’t listened to input from the world-class minds around him, Einstein wouldn’t have had anywhere near the successes or the impact that he did. Although our culture loves soundbites, perhaps the most famous of Einstein’s being “imagination is more important than knowledge,” these sorts of advances absolutely require both. Regardless of the ratio of “inspiration” to “perspiration” required, there’s simply no way around the need for expertise and hard work if you want to make a meaningful advance.

This article was first published in April of 2022. It was updated in February of 2026.

Gentle parenting is doomed to fail | Maryanne Fisher » IAI TV

  • RESOURCE-SEEKER: Children are active evolutionary agents biologically programmed to extract the maximum amount of resources from their parents and environment.
  • CONFLICT-DRIVEN: Parent-offspring conflict is a natural biological reality where children demand more investment than parents are predisposed to provide.
  • GENETIC-STRATEGY: Siblings compete for limited resources because they share only 50% of their genes with one another but 100% with themselves.
  • NURSERY-SELFISHNESS: Behaviors such as weaning resistance, regression, or bedwetting are tactical maneuvers used by children to divert parental attention and investment.
  • FAD-FAILURE: Parenting trends like "gentle" or "tiger" parenting fail because they incorrectly treat evolutionary conflict as a pathology to be cured rather than a feature.
  • DE-STIGMATIZATION: Recognizing these behaviors as survival programs rather than moral failings or "bad character" helps reduce parental guilt and emotional tension.
  • RESOURCE-MANAGEMENT: Resolving family tension requires managing the perception of scarcity through "ring-fenced" resources rather than focusing on discipline or personality shifts.
  • POLICY-REALISM: Effective social policies must bypass parental allocation bias by directing resources, such as educational vouchers or food, directly to individual children.

Children are active evolutionary agents adapted to extract the maximum resources from their environment, argues evolutionary psychologist Maryanne Fisher. She reveals why parenting fads like “gentle parenting” or “tiger parenting” that promise perfect harmony inevitably fail, and how we might better design social policy to support families as they actually exist, rather than as we wish them to be.

The family tug-of-war

We tend to view the family unit through a lens of inherent altruism. Parents are providers, children are receivers, and the entire family structure is magically engineered for mutual cooperation. When conflict arises, like when siblings fight over a toy, or a child demands attention at the expense of a parent’s exhaustion, we often treat these moments as aberrations, failures of discipline, or developmental hiccups to be smoothed over with the right technique. “If only I were better at applying gentle parenting methods,” a mother might think, “then my children would be so much better behaved!” 

However, evolutionary biology offers a starkly different, and perhaps more uncomfortable, perspective. Far from being passive recipients of care, children are active evolutionary agents adapted to extract maximum resources from their environment. In the context of the family, that environment includes, at its center, the parent. This parent-offspring conflict, as it is called, suggests that children are biologically predisposed to seek more investment than parents are predisposed to give, and to secure resources from siblings even at the expense of those siblings’ well-being. Understanding this biological reality is not an exercise in cynicism. Rather, it provides a powerful framework for making sense of family tension. It explains why parenting fads that promise perfect harmony inevitably fail, why sibling rivalry is so persistent, and how we might better design social policy to support families as they actually exist, rather than as we dream or wish them to be.

The economics of genes

To understand why a child might act against the interests of their parents or siblings, we must look at the genetic arithmetic involved. In the 1970s, biologist Robert Trivers formalized the theory of parent-offspring conflict. The logic is rooted in genetic relatedness. A parent is equally related to all their biological children (sharing approximately 50% of their genes with each). Therefore, from the parent’s evolutionary perspective, resources should generally be allocated in a way that maximizes the survival and reproductive success of all offspring collectively. Importantly, optimal allocation does not require an equal distribution of resources among children, since their needs and capacities differ across developmental stages. Parents may, for example, invest more (or less!) intensively in a vulnerable infant or support the educational pursuits of an older child, as circumstances warrant.

___

Far from being passive recipients of care, children are active evolutionary agents adapted to extract maximum resources from their environment.

___

The child, however, has a different calculation. While it might seem obvious, it is important to remember that a child shares about 50% of their genes with a full sibling, but 100% of their genes with themselves. While they have an evolutionary interest in their sibling’s survival, that interest is only half as strong as their interest in their own survival. Consequently, a child is naturally selected to demand more resources, such as food, attention, or protection, than the parent may want to provide, and to take those resources even if it incurs a cost to a sibling.
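
The article leaves this arithmetic implicit, so it is worth spelling out (a standard formalization following Hamilton and Trivers; the symbols are illustrative, not from the text). Suppose grabbing a resource raises the grabbing child’s fitness by \( B \) while costing a full sibling \( C \). The child weights the sibling’s fitness by relatedness \( r = 1/2 \); the parent weights both children equally:

\[
\text{child favors the grab: } B - \tfrac{1}{2}C > 0 \;\Rightarrow\; B > \tfrac{C}{2};
\qquad
\text{parent favors it: } B - C > 0 \;\Rightarrow\; B > C.
\]

In the window \( C/2 < B < C \), selection pushes the child to grab and the parent to resist. That window is the conflict zone, and it follows from relatedness alone, which is why no parenting technique can legislate it away. The same ledger predicts weaning conflict: a mother is selected to wean once the cost to future offspring exceeds the benefit to the current one, while the nursling is selected to resist until the cost exceeds twice the benefit.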

This is the “selfishness” of the gene manifesting in the nursery. It explains why we observe weaning conflicts in mammals, where offspring demand milk long after it is optimal for the mother to stop nursing. It illuminates the phenomenon of regression, where an older child reverts to baby-like behavior upon the arrival of a new sibling. Indeed, observational studies have noted that many children display regressive behaviors such as bedwetting or demanding to be carried following the birth of a sibling. These are strategic attempts to divert parental investment back toward themselves.

When resources are perceived to be scarce (whether those resources are caloric, financial, or emotional), this conflict between parent and child, or between siblings, intensifies. The evolved need to survive pushes the organism, unconsciously, to secure its share before it is depleted. In modern Western contexts, “scarcity” might not mean famine; it might mean a parent’s limited time after a long workday or a finite budget for extracurricular activities. The child’s adaptations detect the limit and trigger behaviors designed to secure their portion, and, if possible, to encroach on the share reserved for a sibling.

The failure of parenting fads

This evolutionary framework exposes the fundamental weakness of many modern parenting trends. From “tiger parenting” to “gentle parenting,” “whisper parenting,” or “attachment parenting,” the parenting industry is cyclical and fad-driven. Each new wave promises that if parents simply adopt a specific set of behaviors, controlling inputs or modulating emotional outputs, they can curate a perfectly harmonious child. These approaches fail to gain long-term traction because they operate on a false premise: that conflict is a pathology to be cured rather than an evolved tendency.

___

Realizing there is “nothing to cure” or that the conflict is a feature, not a problem, can be liberating.

___

Take “gentle parenting,” which emphasizes empathy and understanding the root of a child’s behavior. While emotionally supportive, it can sometimes leave parents baffled when a validated, understood child still acts with intense selfishness or aggression toward a sibling. The parent feels they have failed the method. But the method has failed to account for the child’s evolved psychology. No amount of whispering or validation removes the evolutionary advantages of competing for limited resources. Similarly, authoritarian styles like “tiger parenting” attempt to override these drives through strict discipline and sometimes extremely high expectations. While this style might suppress the outward expression of conflict, it does not remove the underlying evolutionary divergence of interests. It simply forces the resource-extraction strategies underground, where they may manifest later as different forms of rebellion or psychological distress.

These styles are ultimately discarded as fads, not because they lack good intentions, but because they do not recognize the core dynamic of conflict. They treat the child as a “tabula rasa” or a project to perfect, rather than as an autonomous agent shaped by countless generations of ancestral DNA, carrying successful strategies for survival and reproduction, and driven by their own survival agenda. Realizing there is “nothing to cure,” or that the conflict is a feature, not a problem, can be liberating.

The therapeutic power of biology

Why is this information important for us to come to grips with? The shift from a purely social constructionist to an evolutionary psychological perspective can be incredibly de-stigmatizing.

In family therapy, parents often carry immense guilt over sibling rivalry or their child’s demanding behavior. They internalize these conflicts as evidence of their own inadequacy or their child’s “bad” character. A therapist armed with an evolutionary perspective can reframe the narrative. When a child steals a toy from a sibling or screams for attention the moment a parent picks up the phone, the child is not trying to be “malicious.” They are executing a behavioral program that, in the ancestral environment, ensured survival.

This understanding helps lower the emotional temperature. It moves the problem from the realm of moral failing to the realm of resource management. If conflict is driven by perceived scarcity, the solution is not necessarily more discipline (imposing costs) or more validation (ignoring costs), but rather altering the perception of scarcity. For example, if siblings are fighting over parental attention, the evolutionary lens suggests that they perceive attention as a zero-sum resource. A therapeutic intervention might focus on creating structures that guarantee “ring-fenced” time for each child, thereby reducing the anxiety of scarcity that triggers the competitive behavior. Here, time is set aside or reserved for a particular purpose for a specific child, ensuring it is protected from being used for anything else. The parents stop trying to “fix” the children’s personalities and start managing the resource environment.

Furthermore, this perspective aids in acceptance. Just as we accept that a toddler will stumble while learning to walk because of biomechanics, we can accept that children will behave in ways that create conflict with parents because of the advantages this behavior had during our long evolutionary history. This acceptance reduces the “second arrow” of suffering—the distress we feel about being distressed.

Implications for policy beyond the idealized family

The implications of parent-offspring conflict theory extend well beyond the living room and into the legislature. Social policies are often designed around an idealized model of the family as a frictionless cooperative unit. When policy ignores the reality of resource competition, it is less likely to succeed. Consider policies related to family welfare and child support. A naïve model assumes that resources given to a household head (usually a parent) will be distributed optimally among all members. However, evolutionary theory suggests that parents and children have different definitions of “optimal.”

Take education policy as an example. Suppose a government introduces a policy (Policy X) that provides a lump-sum grant to parents to cover educational costs for all their children, assuming the parents will distribute it according to the educational needs of each child. Evolutionary theory warns us that parents might unconsciously invest more heavily in the child they perceive as having the highest reproductive potential or the one who aligns most with their own interests, potentially neglecting a less “promising” or more difficult sibling.

___

We force families to conform to a model of pure altruism that does not exist in nature.

___

A more effective policy (Policy Y) informed by this theory would connect resources directly to the individual child. Instead of a household grant, the state provides individual scholarship accounts or vouchers in the child’s name that cannot be transferred to a sibling, and that are not informed by a parent’s favouritism. This bypasses the parental allocation bias and the sibling competition dynamic, ensuring that the resource reaches the specific biological agent for whom it was intended. Of course, the allocation of such funding may still be influenced by factors such as the child’s prior academic achievements, individual interests, or other relevant criteria, but it constitutes a foundational step toward more equitable resource distribution.

The other end of the life course presents a similar situation. As parents age, the flow of resources often reverses, from child to parent, but the conflict remains. Siblings often fight over who bears the burden of caring for aging parents and how inheritance is divided. A policy that relies on informal family agreements to manage elder care (assuming sibling cooperation) often leads to acrimonious breakdowns and inadequate care for the senior. Recognizing the competitive nature of siblings, effective policy might incentivize caregiving through formal state mechanisms like tax credits specifically for the caregiving child, rather than leaving it to vague familial obligation. By formalizing the resource exchange, the state acknowledges the conflict and provides an external structure to mitigate its effects.

As a third example, in resource-poor ecologies, nutritional programs often target the “family.” But if food is scarce, stronger siblings may dominate the food supply at the expense of weaker ones, regardless of caloric need. A policy informed by evolutionary insight would avoid handing out bulk rations to be divided at home. Instead, it would prioritize school feeding programs where children are fed individually under supervision. This removes the resource from the arena of sibling competition or parental favouritism and ensures the needs of the individual child are met directly.

Realism can be a tool for harmony

To say that children are adapted to exploit parents and siblings is not to say that love does not exist. Humans possess a remarkable capacity for altruism. Evolutionary explanations do not assert that individuals are controlled solely by their genetic inheritance; rather, they recognize that behaviour arises from the ongoing interaction between genetic predispositions and environmental influences, and that “environment” can include one’s family, social context, and culture. Thus, from an evolutionary view, the experience of parenting is one of deep love punctuated by baffling conflict. By ignoring the evolved roots of that conflict, we deny the reality of the experience. We force families to conform to a model of pure altruism that does not exist in nature.

Embracing the evolutionary perspective that the family is a unit that engages in both cooperation and conflict of interest allows us to move forward with clearer eyes. It helps us forgive ourselves for the fights that break out. It helps therapists guide families away from blame and toward resource management. It helps policymakers design systems that work with human nature, rather than against it. When we understand the “selfish” child not as a moral failure but as a survival specialist, we stop trying to cure them. Instead, we start trying to build a world where their survival does not have to come at such a high cost to everyone else.

Will Life on Mars Require a Genetic Rewrite? | The MIT Press Reader

  • Biological challenges: Genetic adaptation is necessary to survive the extreme radiation, microgravity, and harsh climates found on Mars and in deep space.
  • Directed evolution: Proponents suggest preemptive genetic modification of humans as a more efficient and humane alternative to the slow and painful process of natural selection.
  • Extremophile integration: Researchers are testing the insertion of genes from hardy organisms, such as the "damage suppressor" protein from tardigrades, into human cells to improve radiation resistance.
  • Precision editing: Modern CRISPR technology allows for programmable, precise alterations to the "code of life," offering a technical advance over older gene-splicing methods.
  • Germline implications: Editing reproductive cells would permanently alter the human lineage, ensuring subsequent generations inherit survival traits without requiring repeat treatments.
  • Ethical context: Historical abuses of eugenics and forced sterilization necessitate rigorous ethical frameworks and caution when pursuing the purposeful manipulation of human evolution.
  • Moral obligation: The philosophy of "deontogenics" posits that humans have an ethical duty to engineer life to prevent species extinction and ensure survival beyond Earth.
  • Human speciation: The use of synthetic chromosomes or mechanical enhancements may eventually cause a genetic split, resulting in space-dwelling populations becoming distinct species from Earth-based humans.

Will Life on Mars Require a Genetic Rewrite?

Microgravity, radiation, and extreme climates pose ethical and biological challenges that researchers are racing to overcome.

By: Scott Solomon

Chris Mason is a man in a hurry.

“Sometimes walking from the subway to the lab takes too long, so I’ll start running,” he told me over breakfast at a bistro near his home in Brooklyn on a crisp autumn morning. “Just so I can get there faster. Not because I’m late for a meeting, just because it’s taking too long to walk…I’m the only one I know who runs to work to get there faster.”

This article is adapted from Scott Solomon’s book, “Becoming Martian.”

Mason is a professor of physiology and biophysics at Weill Cornell Medicine. At least that’s his official title. He seems to be working on a hundred different projects all at once, ranging from tracking changes in the virus that causes COVID-19 to helping corals adapt to climate change.

The previous day, I had visited his research group on the Upper East Side. The Mason Lab occupied four separate laboratories across three buildings and was still growing. Although they were pursuing a wide range of projects, a major focus of their work was on how the human genome and microbiome are affected by spaceflight. What Mason and his researchers know for sure is that settlement of space will lead to major changes to our biology, one way or another.

If we let these changes unfold naturally, evolution will take its course, and people on Mars will gradually become better adapted to the conditions there through mutation and natural selection. Founder effects and genetic drift will cause random changes in Martians and reduce their genetic diversity. Enforced quarantines due to the risk of spreading infectious diseases could accelerate the speciation process. But these natural evolutionary processes will be relatively slow and, to put it mildly, quite unpleasant. What if we could accelerate the process of adaptation and minimize the human suffering that it would otherwise entail?

Mason thought that we could — and he laid out his argument in detail. “One possibility is we simply allow evolution to gradually select for characteristics required to survive on these new planets,” he wrote in his book, “The Next 500 Years.” “This is basically the ‘sink or swim’ approach to life’s survival, except with no lifeguards and bricks tied to your feet.”

However, there is an alternative: “Our second option to enable Earth’s life to live on other planets is to preemptively direct this genetic process, so that the life we send is already capable of surviving in its new home. More complex, yes — but also more humane.”


The basic idea of acquiring new abilities by taking DNA from one organism and putting it into another has existed since the 1970s. In 1972, biochemist Paul Berg became the first person to do this when he copied a short piece of DNA from a virus that attacks bacteria into another kind of virus that attacks monkeys. The following year, Herbert Boyer and Stanley Cohen applied this “gene splicing” technique to insert genes from one species of bacteria into another. They found that the inserted gene persisted in subsequent generations as the bacteria divided. Going one step further, they then spliced genes from a frog into a bacterium, and found that the frog genes became a permanent addition to the bacterium’s genome.

This was the dawn of a revolution in biotechnology. Recombinant DNA — meaning DNA copied from one organism and pasted into another — could be used for an incredible number of things, from producing life-saving medications like insulin to more whimsical applications like making glow-in-the-dark cats and goldfish. But it was also the beginning of an era in which humans could directly control the evolution of any species by manipulating their DNA. Mason saw the potential to take genes from organisms naturally well-adapted to harsh conditions and insert them into human cells to help prepare people for the hazards beyond Earth.

Settlement of space will lead to changes, one way or another. If we let it unfold naturally, evolution will take its course.

One candidate for such a hardy creature is the water bear, or tardigrade. Tardigrades are distant relatives of insects but have a unique appearance. They are barely visible to the naked eye, but under a microscope, they look like tiny gummy bears with eight chubby little legs and mouths shaped like nozzles. They thrive in moisture, but their adaptability allows them to live almost anywhere, from the sea to the soil in your backyard. One of the ways they are able to live in such a wide range of habitats is by tolerating long periods of harsh conditions — say, a drought — by essentially shriveling up. In their dehydrated state, they are almost invincible, which is what has drawn the attention of biologists interested in life in outer space.

In 2016, a team of Japanese researchers led by Takekazu Kunieda and Atsushi Toyoda sequenced the genome of one particularly hardy species of tardigrade. In the process, they discovered that tardigrades produce a protein that helps them survive in a dehydrated state. They named the protein “damage suppressor,” abbreviated Dsup. The researchers then took a major leap: They extracted the Dsup gene from the tardigrade genome and temporarily spliced it into human cells. To be clear, the human cells were growing in a laboratory, not a human body. Nevertheless, they found that when the tardigrade gene was inserted, human cells could produce Dsup. And — most significantly — when they exposed the human cells making Dsup to radiation in the form of X-rays, the cells were less damaged and better able to grow than normal human cells.

Chris Mason’s lab began working to further improve human cells’ ability to withstand the harsh conditions of space by splicing in genes from tardigrades and other organisms that survive in extreme environments. He sees this as the beginning of an era in which human cells can be endowed with a great variety of abilities. He predicts that by the year 2040, “genes from all organisms will become a playground for creating and making new functions in human cells.”


The notion that the diversity of life on Earth represents a genetic “playground” for us to draw from seems exciting, but it is also fraught with risks, ranging from the biological to the ethical.

Indeed, after the first demonstrations of gene splicing, researchers quickly recognized the double-edged-sword nature of the new technology. A voluntary moratorium was called on the use of recombinant DNA. A conference was held in 1975 at Asilomar Beach in California, where many leading researchers came together to develop guidelines for its use. The meeting, which has variously been called “Woodstock for molecular biology” and “the Pandora’s box conference,” was in part an attempt by researchers to come up with their own limits on the use of genetic technology in the hopes that doing so would prevent government regulations. Indeed, after four days of meetings, the researchers agreed to lift their own self-imposed moratorium on the use of recombinant DNA, albeit with some guardrails intended to prevent what they saw as its most potentially dangerous uses.

It seemed plausible that some genetic diseases could be cured with recombinant DNA by swapping out the section of DNA responsible for the condition with DNA from a healthy donor.

The first success came in 1990 with a child named Ashanthi DeSilva. At the age of two, she had been diagnosed with SCID — the same condition as the “bubble boy,” David Vetter — which results in a nonfunctional adaptive immune system. In 1990, when DeSilva was four years old, she received an experimental gene therapy, the first approved for testing in a human patient, to replace the cells in her bone marrow that cause the condition. It worked. With the modified genes, DeSilva’s immune system began functioning well enough for her to go outside, attend school with other kids, and lead a normal life.

Another major breakthrough came with the discovery of a way to edit DNA directly. It happened, as many scientific discoveries do, in a roundabout way. In 1990, Francisco Mojica was a graduate student at the University of Alicante in Spain, studying a type of single-celled microbe called archaea. After sequencing some of their DNA in hopes of learning how they tolerate so much salt, he found something unexpected. In between sections that looked to him like normal DNA, with the usual combination of all four DNA bases A, T, C, and G, were sections that kept repeating the same bases. Even stranger, these repetitive sections were also palindromes, meaning they could be read the same way forward and backward. He found 14 of these sequences clustered at regular intervals around otherwise normal DNA sequences.

Puzzled, Mojica searched the scientific literature for anything similar in other organisms. He only found one, which seemed to share the same peculiar cluster of repetitive DNA sequences. He published his results, unsure of the sequences’ function. He would later give them a cumbersome name with a catchy acronym — clustered regularly interspaced short palindromic repeats, or CRISPR.
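
One clarification worth adding (standard molecular-biology usage, not spelled out in the article): a DNA “palindrome” is a sequence that equals the reverse complement of itself, so the two strands read the same in opposite directions. A minimal Python sketch of that check (the example sequences are illustrative, not Mojica’s data):

    # Map each base to its Watson-Crick partner (A-T, C-G).
    COMPLEMENT = str.maketrans("ACGT", "TGCA")

    def reverse_complement(seq: str) -> str:
        """Return the opposite strand of a DNA sequence, read 5' to 3'."""
        return seq.translate(COMPLEMENT)[::-1]

    def is_palindromic(seq: str) -> bool:
        """True if the sequence reads the same on both strands."""
        return seq == reverse_complement(seq)

    print(is_palindromic("GAATTC"))  # True: a classic palindromic site
    print(is_palindromic("GATTAC"))  # False

Real CRISPR repeats are only partially palindromic, but this complementary-strand symmetry is part of what made the repeated units stand out against the surrounding DNA.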

Is it ethical to make decisions that will directly affect future generations who will not have any choice in the matter?

Soon, CRISPR sequences were found in a wide range of other microorganisms. Researchers in the dairy industry found them in the bacteria that ferment milk into cheese and yogurt. Intriguingly, they noticed that new CRISPR sequences appeared in the dairy bacteria after an attack by viruses — and that the CRISPR sequences matched sequences from the viruses’ genomes. What’s more, the bacteria with the new CRISPR sequences were no longer vulnerable to attack from the same virus. The CRISPR sequences were acting as a type of immune response by the bacteria: The bacteria were learning to recognize the virus so that they could defend against it in the future.

The mechanism for how CRISPR works was figured out by a team of researchers led by biochemists Jennifer Doudna, at UC Berkeley, and Emmanuelle Charpentier. They discovered that CRISPR works with the help of proteins, called Cas — short for CRISPR-associated proteins. Cas proteins, such as Cas9, cut DNA like a molecular scalpel. Bacteria use CRISPR-Cas9 to recognize the unique DNA of a particular virus and then chop it up to destroy it. But what Doudna and Charpentier also found is that they could control which DNA sequence was targeted. It didn’t have to be DNA from a virus. It could be DNA from any living thing. If the DNA is inside a living cell, the cell’s machinery will naturally repair the damage. But the most exciting part of all was that Doudna and Charpentier found that they could manipulate the repair process so that a stretch of DNA could be cut out and replaced with any sequence they wanted. In other words, it was programmable.

“In the history of science, there are few real eureka moments, but this came pretty close,” wrote Doudna biographer Walter Isaacson about the breakthrough. Unlike the copy-and-paste gene-splicing approach, CRISPR can make precise, deliberate edits to an organism’s genes. “In short, they realized that they had developed a means to rewrite the code of life,” wrote Isaacson.
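
“Programmable” can be taken literally: the targeting information is nothing more than a short sequence. As a toy sketch of what that means in practice (my own illustration, not the researchers’ code), here is how one might enumerate candidate target sites for the commonly used SpCas9 enzyme, which requires the 20-letter target to sit immediately before an “NGG” motif known as the PAM:

    import re

    def candidate_guides(dna: str):
        """Yield (position, 20-nt protospacer) pairs wherever a
        protospacer is immediately followed by an NGG PAM."""
        # Zero-width lookahead so overlapping sites are all reported.
        for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", dna):
            yield m.start(), m.group(1)

    site = "TTGACGCATCGTACGGATCCAGTACCGTTAGGCAT"  # made-up sequence
    for pos, guide in candidate_guides(site):
        # A guide RNA matching this sequence would steer Cas9 to cut here.
        print(pos, guide)

    # A real design tool would also scan the reverse-complement strand
    # and score each guide for off-target matches elsewhere in the genome.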

Years later, clinical trials were underway to test whether CRISPR could be used to treat conditions ranging from diabetes and blood disorders to certain forms of cardiovascular disease and cancer. By 2023, the first two CRISPR-based treatments were approved in the United States — one for sickle cell disease and another for the blood disorder beta-thalassemia.

There is a catch, however. While the hope is that patients receiving CRISPR treatments will be fully rid of their diseases, the gene-editing approaches approved so far would not prevent any of their children from inheriting their parents’ diseases. The genetic changes are made only to DNA in somatic cells — the cells of the body that are not involved in making sperm or eggs. For their children to be cured, they would need to undergo the same treatment as their parents. The same would be true of every subsequent generation.

The alternative would be to make edits to cells in a way that affects not only somatic cells but also germline cells — those that become eggs, sperm, and eventually embryos and then babies. Germline gene editing is possible, although it crosses a line that some believe should not be crossed. The reason is that any edits made to germline cells will affect all the descendants of the individual receiving the treatment, for countless generations. This raises new types of ethical questions. It is one thing to perform a procedure on a living person, who can be educated about the potential risks and benefits and who can give their informed consent. Is it ethical to make decisions that will directly affect future generations who will not have any choice in the matter?


Changing the germline means we are, whether we realize it or not, controlling the future of evolution. Yet while the techniques of gene editing are new, the idea that we humans can guide evolution is not. As Chris Mason pointed out, we have been doing so for millennia through the practice of selective breeding in agriculture and the domestication of animals.

“While controlling the evolution of the past, present, and future seems scary and wrought with incredible hubris, the reality is that we already have been engineering and modifying species and the environment around us, except previously we were doing so by accident with no foresight,” Mason wrote. “Now, finally it can be done with a sense of responsibility and purpose.”

Yet the idea of purposefully controlling the evolution of our own species has a dark history.

Any discussion of manipulating the future of human evolution has to consider…the ways in which those efforts were perverted and abused.

In 1883, Francis Galton proposed improving our species through selective breeding in much the same way we do for animals, which he described as “the science of improving stock.” Among his investigations were the first studies of twins and an attempt to determine which physical characteristics criminals had in common so they could be recognized before committing crimes. Based on his observations, Galton thought it would be possible to make the characteristics he considered positive — such as good health, intelligence, and responsibility — more common in society by encouraging marriages between people from families with a history of these traits. He called this idea “eugenics.”

As Galton’s ideas spread, they also evolved. In addition to encouraging the breeding of people with supposedly good characteristics, some sought to achieve similar results by preventing the reproduction of people with traits they considered undesirable. The first government to enact laws based on eugenics was the state of Indiana in 1907, followed soon after by 31 other U.S. states. The laws included forced sterilization for people labeled “criminals, idiots, imbeciles, and rapists.” The issue was brought before the Supreme Court in 1927. The question was whether a 21-year-old woman named Carrie Buck could be surgically sterilized because she had been labeled an “imbecile,” which the prosecution argued was hereditary. In an 8–1 ruling, the Court determined that forced sterilization was indeed legal.

In Germany, the Nazi Party modeled its policies on the American eugenics laws. They passed a law in 1933 that mandated surgical sterilization for anyone they determined to be carrying a “hereditary disease.” But sterilization was just the first step. Soon, the Nazi efforts of “racial hygiene” would include murder and genocide.

The atrocities committed in the name of eugenics in the first half of the 20th century were based not only on prejudiced and racist views, but also on flawed science. We now know that there is little, if any, genetic basis for the traits that proponents of eugenics sought to control. Still, any discussion of manipulating the future of human evolution must consider the flaws inherent in previous attempts to do so, as well as the ways those efforts were perverted and abused.

Questions about the ethical use of gene editing become more complex when considering humans on other planets. Would it be ethical to change a gene to make a person traveling to Mars better able to tolerate lower gravity or higher radiation? What about for a child born on Mars? Could genome editing make it easier to allow people to move safely between planets, for example, by altering their immune systems?

Chris Mason sees gene editing in the context of space settlement as a moral imperative. “Sending any Earth-evolved organism to another planet would result in almost certain death, which represents the sad, evolutionary ‘good luck’ plan,” he wrote. “To save life, we will need to engineer it.”

Mason’s reasoning is based on an ethical philosophy he calls “deontogenics.” According to this way of thinking, as a species that is aware of the possibility of our own extinction and that of other species, we have an ethical obligation to try to prevent that from happening. “Any act that consciously preserves the existence of life’s molecules . . . across time is ethical. Anything that does not is unethical,” Mason wrote.

With this framework in mind, Mason and his research team are pressing forward on genetically engineering human cells to make them better adapted for conditions beyond Earth. They have had some success with getting human cells to produce the Dsup protein that helps tardigrades survive in space. So far, their work involves only human cells being grown in a lab, but he hopes that will soon change. “I’d say human trials are 10 years away,” he told me.

A list of other genes that could be modified to help people deal with life on Mars and elsewhere has been identified by George Church, Chris Mason, and colleagues at Harvard’s Consortium for Space Genetics. They include genes that influence bone density, muscle tone, radiation resistance, and even pain tolerance. In part, the list comes from studies of existing genetic variation among people alive today. It also comes from organisms capable of living in extreme environments, like tardigrades and others.

“Any act that consciously preserves the existence of life’s molecules…across time is ethical.”

One particularly hardy species of bacteria was first discovered in the 1950s in a can of meat that had been exposed to a whopping dose of 5 million millisieverts of radiation. The goal was to determine whether radiation could be used to sterilize canned foods and make them safe to eat. Yet the bacteria were still alive. The researchers identified them as belonging to the genus Deinococcus and named the species radiodurans in reference to their remarkable ability to endure such high radiation exposure. Even tougher microbes, like the appropriately named archaeon Thermococcus gammatolerans, have been found in the water used to cool nuclear power plants. The genetic basis of these species’ abilities to withstand radiation is being investigated by Mason and his colleagues for their potential use in engineering life beyond Earth.

Another approach Mason is researching is to engineer the genes of bacteria and other microbes in our microbiome to produce useful products, including Dsup. This way, no changes to human cells would be required, but people might still reap the benefits if the substances produced by the microbes are active within the human body. They already have some microbes in the lab that seem capable, he told me, but so far, they have not tested whether the microbes would work the same way when living in humans.

“It’s still a few years before we do a trial like that,” Mason said.


So, we can edit our genes or those of our microbial partners. But there is yet another way that genetic technology may facilitate the human migration into space: by creating genes that do not yet exist.

In the first decades of the 21st century, the field of synthetic biology emerged with the goal of creating new functions for living things using genetic tools. Scientists at the J. Craig Venter Institute created the first complete synthetic genome of a simple organism, a type of bacteria, in 2010. Work has since been underway to create synthetic genomes for more complex organisms. Eventually, synthetic human genomes might be possible.

One idea, suggested to me by biologist Tiffany Vora, is to create synthetic portions of a human genome. For example, while humans normally have 23 pairs of chromosomes, one or more new chromosomes could be added to augment our existing genome. “The idea that we’re going to find all the mutations we need in Earth’s situations — I don’t believe it, because we’re fundamentally looking for a non-Earth context,” Vora told me. The advantage of this approach is that the existing genome could be left untouched. “If you can make really long artificial chromosomes, then you don’t have to change the person — you just give them a patch, essentially.”

This raises the possibility that future humans with additional synthetic chromosomes may be genetically incompatible with people without them. If used for space settlement, this could be yet another force driving a wedge between humans from Earth and humans living elsewhere. Adapting to life in space may require genetic engineering, but engineering people for space might also contribute to a split in humanity. At some point, people may have to choose between prioritizing adaptation for life on other planets and maintaining human beings as a single species. It might not be possible to achieve both.

Other ideas for how to use technology to help people adapt to life beyond Earth include enhancing our bodies with mechanical, electronic, or robotic components. We are already accustomed to glasses, hearing aids, prosthetic limbs, artificial hearts, and many other devices that improve human health and well-being. Brain–computer interfaces can be added to the list.

“I wake up almost every morning and think about the Sun engulfing the Earth,” he told me. “It’s almost the first thought in my mind.”

Numerous private companies working on brain–computer interfaces have recently emerged, suggesting the technology is maturing. In 2024, Neuralink — a company owned by Elon Musk — implanted its first experimental device in a human patient. As the technology improves, brain–computer interfaces will allow better control of artificial limbs and exoskeletons as well as other devices such as vehicles, robots, and more.

These technologies could certainly be helpful for life on other planets. Connecting the brain to devices that enhance the senses could give people the ability to see or hear in ways that our eyes and ears cannot do on their own. Imagine a Mars rover, with all of its sophisticated tools and machinery, controlled entirely by the human mind. Now imagine that you are the rover. Humans with these enhanced abilities could become the most capable and best-adapted Martians.


If humans — in one form or another — are going to ever leave our solar system, Mars will be an important stepping stone. On Mars, humanity will learn to create and sustain new settlements. Chris Mason thinks of this first, cautious step in humanity’s lifetime as being like going to college. “Leaving the house you grew up in, traveling just out of the reach of your parent’s ability to instantly help you, and testing your limits, boundaries, and potential — all while having fun, learning a lot, and likely getting into trouble,” he wrote about our first settlements on Mars.

If we do manage to spread out and survive on planets scattered across our solar system and others, we should expect to evolve, adapt, and speciate everywhere we go. Like tortoises and finches on Earthly islands, the conditions on each of the cosmic islands will influence how the people there will evolve. Some may choose to let the natural forces of mutation, natural selection, and genetic drift determine how they change. Others may decide to take matters into their own hands, using technology to guide the process.

To ensure we are ready, Chris Mason is moving forward with his work on engineering the genes of living things — humans and microbes — for their future in space. Despite often thinking in timescales that involve hundreds, millions, or even billions of years, he sees his work as urgent.

“I wake up almost every morning and think about the Sun engulfing the Earth,” he told me. “It’s almost the first thought in my mind. It’s a cosmological fact. I see the Sun every morning. It’s still there, and it’s only going to get bigger…I only have so much time…I’ll have another, say, thirty years, forty years, maybe, of productive work I could do. Maybe fifty, at most. But that’s it. I don’t have 500 years…I want to do as much as I can.”

Suddenly, his fast talking made a little more sense.


Scott Solomon is a Teaching Professor at Rice University in Houston. He is also a Research Associate at the Smithsonian Institution’s National Museum of Natural History and the author of “Future Humans” and “Becoming Martian,” from which this article is adapted.
