Strategic Initiatives

On the assassination of Charlie Kirk, and what it means for us all

1 Share
  • Who/What/When/Where/Why: Alex Berenson (2016–2024, U.S.) ties left institutional control and a campaign to police online speech to struggles over social media and an eventual shooting at Utah Valley University aimed at silencing free-speech voices.
  • Institutional control: Claims the left dominated legacy media, Hollywood, and academia and dictated which facts and viewpoints were acceptable.
  • Core narrative shifts: Attributes a cascade of accepted ideas to that control, including explanations for Black incarceration, drug policy, immigration benefits, and gender as a social construct.
  • Right-wing response: Notes early conservative platforms (Rush Limbaugh, Fox) and later the mid-2010s social-media surge that broke elite speech monopolies.
  • Government pressure on platforms: Alleges escalating Democratic and White House efforts after 2016, centering on Covid/mRNA vaccine censorship and culminating in a Feb 2022 DHS bulletin linking “misleading narratives” to terrorism.
  • Platform dynamics: Describes tech firms’ resistance, Facebook’s stock decline (Sept 2021–Dec 2022), Elon Musk’s 2022 Twitter purchase to expand speech, and Spotify defending deplatformed creators.
  • Legal fallout: Reports Berenson sued the Biden administration and Pfizer claiming a conspiracy to silence his Covid reporting, while free-speech organizations and legacy media declined to support or largely ignored the case.
  • Charlie Kirk and the event: Portrays Kirk as a rising, free-speech advocate who drew thousands to Utah Valley University, faced intensified left hostility, and was struck by a single shot as he addressed the audience.

(Part 1)

Want to find a society’s power center?

Follow an assassin’s bullet.

For decades, the left used its control of the legacy media, Hollywood, and academia to decide not just what issues but even what facts could be discussed. It deemed those it found ideologically uncomfortable to be beyond the pale. The original sin came with the flat denial that differing rates of criminality — rather than police or criminal justice disparities — are the reason so many black people are in prison.

Once the media agreed to that lie, the rest followed: That drug use was harmless and should be medicalized and decriminalized; that open immigration always benefitted society and was opposed only by bigots; that men and women had no innate differences and gender was a social construct.

(Free speech, today more than ever)

Right-wing opposition came slowly, first on radio, led by Rush Limbaugh, then with Fox News. These outlets were powerful and profitable individually. But the legacy media collectively dominated them, and the left’s dominance of Hollywood and academia only became more complete.

But by the mid-2010s, social media had crashed the elite monopoly on speech: Facebook, YouTube, Twitter, places where Americans and people everywhere could speak to each other without gatekeepers, to joke and debate and talk about how they really felt.

Donald Trump intuitively felt and used the power of this new media more effectively than anyone else.1 He saw how many Americans felt disenfranchised and unheard, and not just in rural America.

So the left, feeling its hold on power slipping, fought one last furious battle to force social media to heel.

The attack began after Trump’s initial election in 2016. At first it used mostly social pressure and was aided by the left-wing slant of the non-engineering employees that social media companies hired during their vast expansion in the 2010s.

But, over time, Democratic pressure became more openly political, first from Congress and then, after Joe Biden took office in 2021, from the White House itself. The White House at first focused its censorship efforts mostly on Covid and the mRNA vaccines, the key issue of 2021. Working with Pfizer, it forced Twitter to ban me over my Covid reporting in the summer of 2021. But by the end of 2021, it was looking past Covid.

In February 2022, the administration moved to attack speech far more directly, issuing a bulletin that would have classified much of it as terrorism.

As I wrote at the time:

The White House has begun an extraordinary assault on free speech in America. It is no longer content merely to force social media companies to suppress dissenting views. It appears to be setting the stage to use federal police powers.

How else to read the “National Terrorism Advisory System Bulletin” the Department of Homeland Security issued on Monday? Its first sentence:

SUMMARY OF THE TERRORISM THREAT TO THE UNITED STATES: The United States remains in a heightened threat environment fueled by several factors, including an online environment filled with false or misleading narratives and conspiracy theories... [emphasis added]

You read those words right.

The government now says “misleading narratives” are the most dangerous contributor to terrorism against the United States.

[EXTREMELY URGENT: The Biden Administration says I'm a terrorist threat.](https://alexberenson.substack.com/p/extremely-urgent-the-biden-administration) (Alex Berenson, February 9, 2022)

Lucky for all of us, the United States wasn’t ready to define speech as terrorism.

And social media companies didn’t want to be forced to police every comment their users made. Section 230 gave them protection if they did, but Facebook in particular depended on a mass audience and didn’t want to alienate Republicans and independents.

Facebook’s stock plunged almost 75 percent between September 2021 and December 2022. The company’s failed bet on the “metaverse” was one reason, but the political fights it faced did not help.

Meanwhile, Elon Musk agreed to buy Twitter in 2022 with the avowed goal of making it a haven for free speech — as he has (after a few early missteps). Incredibly, his plans sparked anger on the left and among legacy media. Free speech equaled Nazis, they insisted.

Spotify, too, stood up to the efforts in 2021 and 2022 to “deplatform” (a hideous Orwellian word) Joe Rogan and other conservative and contrarian voices.

So 2022 represented the high-water mark of the left’s effort to take over social media and normalize censorship on the new platforms. It still controlled the legacy media, but by 2022 the legacy media no longer controlled the window of acceptable speech, or much else — a source of great angst for it.

But the left’s feelings about free speech, its new dislike for the fundamental principle that every point of view should be allowed to compete in the marketplace of ideas, didn’t improve.

In fact, they hardened, as I wrote in 2023, pointing to a poll that showed “an overwhelming number of Democrats no longer support First Amendment protections for free speech.”

[The scariest poll you'll see this summer](https://alexberenson.substack.com/p/the-scariest-poll-youll-see-this) (Alex Berenson, July 22, 2023): “A majority of Americans - and an overwhelming number of Democrats - no longer support First Amendment protections for free speech.”

I found this out myself the hard way.

After I discovered the extent of the 2021 censorship efforts against me, I decided to sue the Biden Administration and senior Pfizer officials. As most of you know, I had, and have, proof that they conspired to force Twitter into banning me because of my writing about Covid mRNA vaccines.

The legal issues around the case are complex, but the facts are simple: the federal government and a company that made $100 billion from selling a product deprived a journalist writing about that product of his audience. This is a scandal, by any measure.

But even the organizations that supposedly care most about censorship, like PEN America (whose motto is “The Freedom to Write”), had no interest in supporting Berenson v. Biden. And the legacy media essentially ignored it.

Free speech? Only for speech the left liked.

(And, with your help, for everyone. Please stand with me at this crucial moment.)

Charlie Kirk believed in free speech to his core. He believed in discussion and debate. And he was good at it: energetic and charismatic, he could connect with young audiences.

And by all accounts — including the results of the 2024 election — he was winning.

(Norman Rockwell, updated)


As Kirk’s audience grew, so too did the left’s hatred for him.

This was the state of play as he sat before an audience of thousands (thousands!) at Utah Valley University, a state school that offers four-year degrees for $6,500 a year to Utah residents — the promise of higher education at an affordable price, a promise so many of the self-proclaimed progressives in academia have forgotten.

They’d turned out under the September high desert sun by the thousands (yes, thousands!) to hear Charlie Kirk, to yell at him, cheer him, and engage in the kind of debate that was a feature of American democracy before the United States was even a nation.

Until a single shot rang out.

When the left told us it didn’t believe in free speech, we should have listened.

(End of Part 1)

1. How a 70-year-old man figured this out is one of the great and, to my mind, still unexplained political mysteries of all time.

Read the whole story
bogorad
3 hours ago
reply
Barcelona, Catalonia, Spain
Share this story
Delete

Social Media Is Navigating Its Sectarian Phase | The New Yorker

1 Share
  • Who/What/When/Where/Why: A comparison of platform cultures on Bluesky, X, and Threads after Elon Musk’s 2022 Twitter acquisition and Donald Trump’s reëlection, explaining how politics and platform structure shape user behavior and platform choice.
  • Bluesky culture: Characterized by recursive, reactionary scolding, slower pace, and niche internecine disputes; X is referred to euphemistically as “the other place.”
  • User composition: Many Bluesky users do not post in English or engage with U.S. politics, yet the platform has developed an identity as a liberal haven post-Musk/Trump.
  • Platform politicization: Which social network people use increasingly signals personal politics, making platform choice more fraught and networks more sectarian.
  • X under Musk: Changes include moderation layoffs, incentives favoring certain influencers, feed tweaks to promote Grok and xAI, and a formal merger into xAI, producing a more overtly politicized environment.
  • Structural differences: Bluesky’s decentralized architecture rewards niche fights and slower interactions; X’s design encourages follower armies, engagement-driven trolling, and rapid amplification.
  • Attention dynamics: Some left-leaning journalists and editors are returning to X for greater reach and the platform’s attention-driven amplification, which requires less effort to reach audiences.
  • Platform trajectories: Text-dominated networks are declining—Bluesky’s daily posters fell after a January peak (yet remain ~4× last year), X lost ~10% of its European user base (~11 million) since August 2024, and Threads has grown via Instagram support but produced no major cultural moments.

Silver is not completely wrong; there is indeed a culture of recursive, reactionary scolding on Bluesky that makes it less fun than it could be. (On the platform, X is euphemistically referred to as “the other place.”) It’s impossible to generalize about the entire population of the site—many of Bluesky’s users do not post in English and do not engage with American politics—yet it has developed an identity as a haven for liberals in the aftermath of Elon Musk’s acquisition of Twitter and Donald Trump’s reëlection as President. More so now than five or ten years ago, which social network you use is a byword for your personal politics, and you can’t use a particular platform without agreeing with, or at least tolerating, its ambient ideology. Social networks, particularly the text-driven platforms that host political debate, have grown more sectarian as politics, in turn, have become more online; if you’re active on one network, you might, like Silver, get blamed for what you say there on another. Thus, people’s choice of which platform to use has become more fraught. A decade ago, it made sense for users to toggle among different social networks that offered different content formats. Now that every platform offers a similar admixture of text, images, and video, choosing where to spend your time feels more like picking your preferred flavor of brain poison.

Silver casts Bluesky as “a peculiar online neighborhood that someone wouldn’t encounter unless she takes a wrong turn.” But the dynamics of X, where Silver posts more or less daily, are far more politicized by design. In the twenty-tens, Twitter hosted a wide range of political viewpoints; it was known as much for distributing activists’ documentation of protests as for serving as Trump’s bully pulpit during his first Administration. After Musk acquired the site, in 2022, he laid off its moderation staff, incentivized particular forms of posting (often from Musk’s favored right-wing influencers), and tweaked users’ feeds toward promoting Grok and xAI, Musk’s artificial-intelligence business. (X was formally merged into xAI this year.) No one can credibly claim that X under Musk is trying to be neutral, or build a better political arena, as Bluesky, whatever its flaws, has earnestly tried to do. The politics of a social platform is determined not just by the opinions of its users but by the underlying structure of the network. Bluesky, part of the decentralized internet, is slower paced and caters to niche interests, rewarding internecine fights over minutiae, whereas X is deliberately chaotic, encouraging the gathering of follower-armies and ideological insult-comedy for an audience that may be largely made up of bots. X-ism, as we might call it, involves the race for engagement at all costs and trolling as praxis, and it is currently winning out.

I suspect that left-leaning journalists and editors are migrating back to X not because of the ideological tenor of Bluesky but because they miss the attention arms race. If X-ism is preferable to Blueskyism, it’s because X users don’t have to work as hard to reach people. On the internet, a bigger megaphone is still considered a better one. Yet parsing the distinctions between Bluesky and X has a burgeoning air of futility. The truth is that text-dominated social networks are largely decaying across the board. According to Bluesky’s open-source data, the number of daily posters on the platform has been declining each month since its peak in January of this year, following Trump’s Inauguration (though it is still four times as large as it was last year). X has been forced to release some of its internal statistics owing to new technology regulation in the European Union; the data, from earlier this year, show that the site has lost around ten per cent of its European user base, around eleven million accounts, since August of 2024. Meta’s Threads app is growing, thanks to its ongoing support from Instagram, but since its launch, in 2023, it has not, at least by my recollection, generated a single major cultural moment or meme.


SpaceX’s lesson from last Starship flight? “We need to seal the tiles.” - Ars Technica

1 Share
  • Who/What/When/Where/Why: Bill Gerstenmaier and SpaceX presented findings two weeks after the Aug 26 Starship test flight (launched from Starbase, Texas) at the AAS Glenn Symposium in Cleveland to diagnose heat‑shield issues, identify improvements, and plan the next flight.
  • Presenter: Bill Gerstenmaier, SpaceX executive responsible for build and flight reliability, delivered the findings Monday at the symposium.
  • Launch Details: The rocket lifted off Aug 26 from Starbase, Texas; it was the 10th full‑scale test of the Super Heavy booster and Starship upper stage, forming the world's largest rocket.
  • Flight Objectives: Primary goals were to resolve prior propulsion and propellant system problems and to collect data on Starship's heat‑shield tiles during reentry.
  • General Outcome: Gerstenmaier reported that "things went extremely well," with engineers diagnosing heat‑shield issues, identifying improvements, and developing a preliminary plan for the next flight.
  • Splashdown: About an hour after liftoff Starship guided itself to a controlled splashdown in the Indian Ocean northwest of Australia, landing within ~10 feet (3 m) of its targeted point near an instrumented buoy.
  • Reentry Sequence and Damage: Video showed a belly‑first descent, relight of three of six Raptor engines to flip upright before ocean contact, and visible damage to the rear end and flaps plus a rusty orange streak along the side.
  • Heat‑Shield Findings: Elon Musk attributed the discoloration to oxidation of metallic test tiles versus ceramic tiles; nearly all tiles remained on the vehicle from launch through landing, yielding useful durability data.

It has been two weeks since SpaceX's last Starship test flight, and engineers have diagnosed issues with its heat shield, identified improvements, and developed a preliminary plan for the next time the ship heads into space.

Bill Gerstenmaier, a SpaceX executive in charge of build and flight reliability, presented the findings Monday at the American Astronautical Society's Glenn Space Technology Symposium in Cleveland.

The rocket lifted off on August 26 from SpaceX's launch pad in Starbase, Texas, just north of the US-Mexico border. It was the 10th full-scale test flight of SpaceX's Super Heavy booster and Starship upper stage, combining to form the world's largest rocket.

There were a couple of overarching objectives on the August 26 test flight. SpaceX needed to overcome problems with Starship's propulsion and propellant systems that plagued three previous test flights. Then, engineers were hungry for data on Starship's heat shield, an array of thousands of tiles covering the ship's belly as it streaks through the atmosphere during reentry.

"Things went extremely well," Gerstenmaier said.

A little more than an hour after liftoff, the Starship guided itself to a controlled splashdown in the Indian Ocean northwest of Australia. The ship came within 10 feet (3 meters) of its targeted splashdown point, near an inflatable buoy in position to record its final descent.

Video from the buoy and a drone hovering nearby showed Starship coming in for splashdown, initially falling belly first before lighting three of its six Raptor engines to flip upright moments before settling into the ocean. But the ship had some battle scars. There was some visible damage to its rear end and flaps, and most notably, a rusty orange hue emblazoned down the side of the 171-foot-tall (52-meter) vehicle.

SpaceX founder Elon Musk said the discoloration was caused by the oxidation of metallic heat shield tiles installed to test their durability and performance in comparison to the ship's array of ceramic tiles. Unlike previous Starship flights, Musk said nearly all of the tiles remained on the vehicle from launch through landing.


Anthropic’s Settlement Unleashes the Russian Winter - WSJ

1 Share
  • Who/What/When/Where/Why: Anthropic and a group of authors settled a 2024 U.S. copyright suit last week for $1.5 billion over alleged unauthorized use of books to train AI models, with the settlement aimed at resolving claims and imposing damages and data-removal terms.
  • Settlement amount: $1.5 billion total.
  • Per-book valuation: The settlement sets the value of a single infringed book at $3,000.
  • Prior court ruling: Judge William Alsup had earlier held that training on lawfully purchased books was transformative and therefore fair use.
  • Data and purge obligations: Anthropic agreed to destroy the datasets at issue and purge the allegedly illicit content from its systems.
  • Strategic framing: The move is presented as a strategic retreat likened to General Kutuzov’s tactics—sacrificing a battle to reshape the broader litigation landscape.
  • Implications for OpenAI: The $3,000 per-book metric and Anthropic’s purge heighten potential liability for OpenAI in separate suits, which also involve allegations of illicit outputs and evidence-spoliation issues and could make OpenAI the anchor defendant in multidistrict litigation.
  • Anthropic’s future exposure: The class settlement covers virtually every registered work in its ingested datasets and likely forecloses most follow-on suits against Anthropic, even as broader litigation against generative AI is expected to grow.

Journal Editorial Report: The week’s best and worst from Allysia Finley, Kyle Peterson and Kim Strassel.

The AI company Anthropic last week did something unexpected. A group of authors had sued the company in 2024, accusing it of using their books to train its AI models without permission. Just as the tide seemed to be turning in the company’s favor in the suit, Anthropic, best known for its Claude chatbot, settled for a landmark $1.5 billion. It agreed to destroy the data used to train its models and set the value of a single infringed book at $3,000. No other AI company being sued by authors has offered anything close.

Anthropic made this move immediately after it secured a favorable ruling. Judge William Alsup had held that training on lawfully purchased books was transformative, and therefore fair use. The plaintiffs’ theory had narrowed from a copyright suit to one on how Anthropic obtained the books, which plaintiffs alleged were from pirated libraries. Anthropic looked poised, if not to win outright, to wear down the plaintiffs through a long and costly battle. So why settle?

A strategy worthy of Gen. Mikhail Kutuzov—that’s why. Like the commander of Russia’s forces amid Napoleon’s failed invasion in 1812, Anthropic seems to be choosing to rewrite the battlefield in its favor rather than win a single battle, even one that’s seemingly pivotal.

After Kutuzov’s surrender of Moscow, the French emperor believed himself triumphant. The Russian army had retreated. The city had been abandoned to French forces. The campaign, it seemed, was complete. But it had only begun.

Kutuzov understood something his adversary didn’t. The purpose of war isn’t to capture cities; it’s to outlast conditions. By yielding Moscow, he bought time. By burning it, he denied the French an ounce of comfort. By drawing Napoleon deep into the continent, he consigned the Grand Armée to a fate no strategy could overcome. Kutuzov handed the war over to what would have been a troublesome foe had he stayed to fight: the Russian winter.

Like Kutuzov, Anthropic shaped its loss to trigger a larger force—entrepreneurial litigation now armed with a dangerously high price—and left its corporate rival, OpenAI, squarely in the crosshairs. Anthropic has burned and abandoned Moscow and now waits for the cold to set in.

Anthropic built its trap from arithmetic. Throughout the litigation, the price per pirated book was theoretical. No longer: It is now $3,000. That number can be applied across tens or hundreds of thousands of works in lawsuits that are still under way.
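The exposure math behind that "trap" is simple to run. A minimal back-of-envelope sketch in Python; only the $3,000 per-book rate and the $1.5 billion total come from the article, and the other work counts are hypothetical illustrations, not figures from any filing:

```python
PER_BOOK = 3_000  # dollars per infringed book, per the Anthropic settlement

def implied_exposure(num_works: int) -> int:
    """Total exposure if each infringed work is valued at $3,000."""
    return num_works * PER_BOOK

# The $1.5 billion settlement implies roughly 500,000 covered works.
works_covered = 1_500_000_000 // PER_BOOK
print(works_covered)  # 500000

# Hypothetical dataset sizes for a defendant facing similar claims:
for n in (100_000, 500_000, 1_000_000):
    print(f"{n:>9,} works -> ${implied_exposure(n):,}")
```

At this rate, every additional hundred thousand works in a contested training set adds $300 million of potential liability, which is why the piece treats the per-book number as the settlement's most consequential term.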

This math will be ruinous to OpenAI, which faces litigation in New York and other states. It was already in a weaker position than Anthropic had been in its suit. The facts are murkier. OpenAI’s cases involve more-complex sourcing for the training data under scrutiny and procedural questions about spoliation, such as whether OpenAI’s deletion of training and output logs constitutes the wrongful destruction of evidence. And while Anthropic settled only regarding alleged illicit inputs, OpenAI is accused of illicit outputs as well: Authors allege that GPT models regurgitate copyrighted passages verbatim. OpenAI denies any wrongdoing.

If the price of allegedly misusing a single book in training is $3,000, the cost of doing so and echoing an author’s voice line for line will surely be higher. Perhaps even more dangerous for OpenAI is Anthropic’s agreement to purge the allegedly illicit content. Anthropic can apparently sever the data over which authors sued from its system. But OpenAI likely can’t purge the data it got from the massive data sets concerned in its lawsuits, some of which have been deleted and are therefore even harder to disentangle from the rest of the training data.

OpenAI’s legal problems are also likely to expand. It’ll probably be the anchor defendant in the mass-tort multidistrict litigation that Anthropic has now made inevitable. While an AI-company victory in Anthropic’s suit may have scared off many potential plaintiffs, the settlement creates an enticing financial incentive to sue. Copyright suits have gone from a niche area to a potential gold rush for lawyers.

But Anthropic itself is likely safe from further lawsuits. The class it settled with includes virtually every author whose registered works appeared in the datasets the company’s models ingested. The settlement forecloses almost all viable follow-on suits. Unless Anthropic uses new infringing material or outputs something egregious, its litigation exposure is over.

The legal campaign against generative AI will still grow. But Anthropic has retreated from the fight, removing itself from the mass-tort model. It’s OpenAI that now faces a bitter cold.

Mr. Klapper is founder and CEO of Learned Hand AI.



Will AI Choke Off the Supply of Knowledge? - WSJ

1 Share
  • Who/What/When/Where/Why: In January OpenAI CEO Sam Altman and CPO Kevin Weil demonstrated ChatGPT’s upcoming “deep research” app to a Beltway audience, showing it could draft a memo preparing a fictional senator for Albert Einstein’s confirmation as energy secretary to showcase the model’s research capabilities.
  • LLM capability: Large language models (ChatGPT, Google Gemini, Anthropic Claude) excel at locating, synthesizing and connecting existing knowledge but do not generate new knowledge.
  • Human knowledge creation: Humans often produce novel insights when answering questions driven by incentives (salary, fame, tenure, clicks, curiosity), incentives that may diminish if LLMs dominate question-answering.
  • Stack Overflow impact: A study found Stack Overflow questions fell 25% six months after ChatGPT’s introduction relative to similar sites; as of this month the site’s questions are reported to be down more than 90%.
  • Wikipedia and web traffic: Research by Hannah Li and co-authors found declines in views and edits for Wikipedia pages similar to ChatGPT outputs after ChatGPT’s launch, and Google’s AI answers have sharply reduced referral traffic to publishers.
  • Antitrust and search: Judge Amit Mehta ruled Google had an illegal monopoly in search but imposed light penalties, noting AI could present a genuine challenge to Google’s dominance.
  • Risks of feedback loop: Columbia’s Hannah Li warned that training LLMs on other LLM outputs can degrade model quality (“model collapse”), likened to passive investing reducing price discovery in markets.
  • Cognitive engagement and personal research: An MIT study found essay writers using LLMs showed less brain engagement than those using search or memory, and a separate personal search into Einstein yielded diverse historical details beyond the demo’s summary.

Illustration: Rachel Mendelson/WSJ, iStock

In January, OpenAI Chief Executive Sam Altman and Chief Product Officer Kevin Weil hosted a demonstration of ChatGPT’s soon-to-be-released “deep research” application. A Beltway audience watched as Weil asked ChatGPT to prepare a memo briefing a fictional senator for the confirmation of Albert Einstein to be energy secretary.

ChatGPT soon produced a thorough profile of Einstein, listing his technical and engineering accomplishments, leadership style, strengths (“a globally respected scientist-statesman”) and weaknesses (“never managed a large organization”) plus questions the senator could ask (“You have been an outspoken voice on nuclear issues since WWII. As Energy Secretary, how will you ensure the safety of nuclear power plants and uphold U.S. commitments to nuclear nonproliferation?”).

The benefits of such impressive, and now routine, capabilities were obvious: enormous savings of time and effort. Of course, there were potential costs: How many jobs could researchers, writers and other knowledge workers lose to artificial intelligence?

I wondered about a different cost: How much knowledge will be lost to AI? Large language models (LLMs) such as ChatGPT, Google Gemini and Anthropic’s Claude excel at locating, synthesizing and connecting knowledge. They don’t add to the stock of knowledge.

By contrast, when humans answer questions, such as whether Einstein should be energy secretary, they often pursue novel avenues of inquiry, creating new knowledge and insight as they go. They do this for a variety of reasons: salary, wealth, fame, tenure, “likes,” clicks, curiosity.

If LLMs come to dominate the business of answering questions, those incentives shrivel. There is little reward to creating knowledge that then gets puréed in a large language blender.

Consider the fate of Stack Overflow, a website where software developers ask and answer questions, becoming both a wellspring and repository for knowledge. 

But then developers started putting their questions to ChatGPT. Six months after its introduction in November 2022, the number of questions on Stack Overflow had fallen 25% relative to similar Chinese and Russian language sites where ChatGPT wasn’t an alternative, according to a study by Johannes Wachs of Corvinus University of Budapest and two co-authors.

The drop was the same regardless of quality, based on peer feedback, refuting predictions that AI would displace only low-value research.

As of this month, the number of questions is down more than 90%. Why should anyone other than Stack Overflow’s owners care? Because, as tech writer Nick Hodges explained in InfoWorld, “Stack Overflow provides much of the knowledge that is embedded in AI coding tools, but the more developers rely on AI coding tools the less likely they will participate in Stack Overflow, the site that produces that knowledge.”

Stack Overflow may be an extreme case. A different study found no similar decline on Reddit.

But there are signs of similar effects elsewhere. Many LLMs are trained on Wikipedia, a repository of knowledge compiled and curated by humans. Columbia University business professor Hannah Li and five co-authors found that between the year before and the year after ChatGPT’s launch, views fell for Wikipedia pages most similar to what ChatGPT could produce. Edits also dropped, a potential sign of less incentive to contribute, although the data were inconclusive.

Meanwhile, as Google has enabled users to answer queries through AI without clicking on links, web publishers large and small have seen referral traffic from search plummet.

Alphabet CEO Sundar Pichai in May. Google AI is reducing search referral traffic to publishers’ websites. PHOTO: JEFF CHIU/ASSOCIATED PRESS

Internet search itself is at risk. Last year, District Court Judge Amit Mehta found Google, a unit of Alphabet, had an illegal monopoly in search. But the dramatic growth of AI prompted him to impose surprisingly light penalties last week. “For the first time in over a decade, there is a genuine prospect that a product could emerge that will present a meaningful challenge to Google’s market dominance,” Mehta wrote.

If LLM output comes to dominate the web, the web will become, well, dumber. Columbia’s Li said in an interview: “What happens when we train LLMs on other LLM outputs? The overall outcomes get worse. The models get worse. This is what they call model collapse.”
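Model collapse can be illustrated with a toy experiment: repeatedly fit a simple generative model (here just a Gaussian) to samples drawn from the previous generation's model. This is a deliberately crude sketch, nothing like a real LLM training pipeline, but it shows the mechanism Li describes: each generation loses a little of the original distribution's spread, and diversity drains away.

```python
import random
import statistics

def fit_and_sample(data, n):
    # "Train" a toy generative model on the data (fit a Gaussian),
    # then produce the next generation's training set from it.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(5)]  # small sample of "human" data
spreads = [statistics.pstdev(data)]
for _ in range(100):  # each generation trains only on the previous one's output
    data = fit_and_sample(data, 5)
    spreads.append(statistics.pstdev(data))

# The spread shrinks generation after generation: the model
# progressively forgets the tails of the original distribution.
print(f"initial spread {spreads[0]:.3f}, after 100 generations {spreads[-1]:.6f}")
```

The small sample size exaggerates the effect for demonstration purposes; with more data the decay is slower, but the direction is the same.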

There is a parallel in what index funds and other passive strategies have done to the stock market. They don’t do research and price discovery (the process of negotiation that reveals an asset’s value). Instead, they free ride on the research and price discovery of active investors. In other words, they exploit market efficiency without contributing to it. In the process, they are squeezing out active investing, leaving a market increasingly dominated by algorithms trading against each other.  

These are, I’ll admit, dystopian scenarios. I could tell a different story of how AI will help scholars discover connections between otherwise disparate bits of knowledge across the web. Joshua Gans, a University of Toronto economist who has written extensively on AI, thinks that so long as new knowledge has value, it will find a way to be created. He says when AI insights are incremental, humans will pivot to more truly novel research.

Maybe. But instead of pivoting, what if humans lose interest in learning altogether? Reliance on AI can cause critical thinking to atrophy, just as reliance on GPS weakens spatial memory. A study by Nataliya Kosmyna at Massachusetts Institute of Technology and seven co-authors asked three groups of subjects to write essays, one using an LLM, one using internet search, and one just their brains. Scans later showed the LLM group had the least engagement across brain regions involved in memory recall and executive functioning; the brain-only group had the most.

Mental engagement, the authors argue, is enhanced by “novelty, encountering new or unexpected content.” That resonates. Dissatisfied with OpenAI’s demo, I searched the web for biographies and writings of Einstein.

I learned his father’s company made electrical equipment based on direct current, then went bust when alternating current triumphed; that he was outspoken in support for civil rights in the U.S. and against oppression of Jews in Germany, for which the Nazis put a price on his head; that during the McCarthyite fervor of the 1940s and 1950s he was called a foreign-born agitator spreading communism; that he wasn’t a communist but was a socialist. In a 1949 essay for a socialist journal, he answered a question I often ponder: how economics differs from the physical sciences, like astronomy: “economic phenomena are often affected by many factors which are very hard to evaluate separately.”

I have no idea if any of that bore on his qualifications to be energy secretary. I could have spent the time on more productive work. But then, acquiring new knowledge has never felt like work.

Write to Greg Ip at greg.ip@wsj.com


Google Fined $3.5 Billion by EU Over Ad-Tech Business - WSJ

  • Who/What/When/Where/Why: The European Union fined Google €2.95 billion on a recent Friday in Brussels for abusing dominance in ad‑tech to favor its ad exchange, a move that drew President Trump’s ire amid sensitive U.S.–EU trade talks.
  • Size and significance: The €2.95 billion (almost $3.5 billion) penalty is the EU’s second‑largest antitrust fine after a 2018 sanction and signals strong enforcement intent.
  • Allegations and remedies: The Commission found Google abused its role in buying and selling digital ads across third‑party sites/apps to drive business to its ad exchange, gave 60 days for remedy proposals, and preliminarily indicated divestiture may be required.
  • U.S. political reaction: President Trump called the action discriminatory and threatened a Section 301 trade investigation to “nullify the unfair penalties,” framing the dispute within ongoing tariff and trade negotiations.
  • Parallel legal actions: The EU case follows a four‑year probe running parallel to a Justice Department antitrust case; a U.S. judge found Google created a monopoly and DOJ has urged selling parts of its ad‑tech business.
  • Google’s response: Google said it will appeal, called the decision wrong and unjustified, and asserted its ad‑tech services face increasing alternatives, according to Lee‑Anne Mulholland.
  • Economic context: Google’s ad‑tech business accounted for roughly 10% of its $71 billion advertising revenue in Q2; observers note fines alone often represent a small fraction of big‑tech revenues.
  • Timing, procedure and stakeholders: The EU delayed an initial Sept. 1 announcement after trade chief Maroš Šefčovič raised concerns; the Commission said internal procedures were completed, antitrust chief Teresa Ribera framed the decision as reaffirming enforcement, and the European Publishers Council urged stronger measures beyond a fine.

In July, WSJ reporter Kim Mackrael explained Trump’s EU trade deal and what could come next. Photo: Andrew Harnik/Getty Images

BRUSSELS—The European Union fined Google nearly $3.5 billion for abusing the dominance of its advertising-technology tools, a move that immediately stirred the ire of President Trump amid already delicate trade talks. 

The fine is the EU’s second-largest antitrust penalty ever, after another Google fine in 2018, and ramps up the threat on both sides of the Atlantic to one of the search giant’s bigger businesses.

The action prompted a swift response from Trump, who called the move discriminatory and threatened to open a Section 301 trade investigation “to nullify the unfair penalties” that he said are being charged to American companies. 

With the EU and U.S. in the middle of sensitive trade discussions, antitrust officials in Brussels had to overcome internal concern from the bloc’s trade chief before issuing the fine, people familiar with the matter said.

The European Commission, the EU’s antitrust regulator, said Friday that Google has abused its dominant role in the buying and selling of digital ads across third-party websites and apps to drive business to its own advertising auction house, known as an ad exchange. 

The bloc gave Google 60 days to propose how to resolve “inherent conflicts of interest,” but added that its preliminary view remains that Google must divest parts of that business. Fines alone haven’t tended to change big tech companies’ business practices because even large penalties represent a fraction of their overall revenue.

Still, the size of Friday’s fine suggests the EU isn’t giving up on its efforts to enforce tough measures against U.S. tech companies, despite pressure from the Trump administration, which has criticized European digital rules.

Issuing a fine in the billions carries political significance, said William Kovacic, a law professor at George Washington University. “It’s a way of saying we’re not going to back off,” he said.

The decision “reaffirms the EU’s unequivocal commitment to competition enforcement,” the bloc’s antitrust chief, Teresa Ribera, said Friday.

Though largely invisible to internet users, Google’s ad-tech tools underpin much of the buying and selling of digital ads that help fund websites and apps across the internet. That line of business, while a declining share of Google’s top line, is still a cash cow, bringing in roughly 10% of Google’s $71 billion in advertising revenue in the second quarter. 

The EU fine of €2.95 billion, equivalent to almost $3.5 billion, follows a four-year-long investigation that has proceeded on a parallel track to an antitrust case brought by the Justice Department.

Google’s ad-tech tools are used in much of the buying and selling of digital ads on websites and apps across the internet. PHOTO: DAVID PAUL MORRIS/BLOOMBERG NEWS

A U.S. federal judge in April found that Google had created a monopoly that allowed it to control parts of the online-advertising industry. In May, the Justice Department argued Google should be forced to sell off two of its ad-tech businesses to address those antitrust issues.

Both cases are likely to lead to protracted legal battles.

Google said it would appeal the EU decision and fine, which it described as wrong and unjustified. “There’s nothing anticompetitive in providing services for ad buyers and sellers, and there are more alternatives to our services than ever before,” said Lee-Anne Mulholland, Google’s global head of regulatory affairs.

The ad-tech cases against Google are separate from the U.S. search case in which a federal judge earlier this week imposed lighter penalties on Google than regulators had sought; that case could have forced the company to divest its Chrome browser.

The EU had initially planned to announce its fine against Google on Sept. 1 but delayed the decision at the last minute after the bloc’s trade chief raised concerns, the people familiar with the matter said. The delay appeared to be related to the EU’s discussions with the U.S. over trade, they added.

The U.S. and EU reached a political agreement on tariffs in July but have continued to discuss the details of the deal, and the EU is waiting for the U.S. to follow through on a pledge to lower auto tariffs to 15%. President Trump also threatened additional tariffs or trade restrictions last month against countries with digital taxes or other rules that could hurt U.S. tech companies.

The bloc’s trade chief, Maroš Šefčovič, on Wednesday declined to comment on what he said were confidential and internal procedures when asked about the timing of the Google announcement. But he said that he supported the Commission’s investigation and had been in regular contact with Ribera, the EU’s antitrust chief.

“This is a complex case that requires a thorough assessment,” Šefčovič said on Wednesday. “I can assure you that my priority is, and always would be, the European interest.”

A spokeswoman for the European Commission said Friday that the decision was adopted once the EU’s internal procedures were completed.

The delay to the much-anticipated Google announcement was earlier reported by legal publication MLex.

The European Publishers Council, which filed a complaint against Google’s ad-tech practices in 2022, said it wanted to see measures that will force the company to change its behavior. 

“A fine will not fix Google’s abuse of its ad tech. Without strong and decisive enforcement, Google will simply write this off as a cost of business,” the Council said.

Write to Kim Mackrael at kim.mackrael@wsj.com and Sam Schechner at Sam.Schechner@wsj.com
