Strategic Initiatives

Alibaba Anonymously Launches New AI Video Model — The Information

  • Model Origin: Alibaba Group released the HappyHorse-1.0 AI video generation model under an anonymous identity.
  • Market Competition: The development serves as a strategic move to contest ByteDance’s dominance in AI video generation and cloud services.
  • Performance Metrics: HappyHorse-1.0 currently holds the top position on the Artificial Analysis leaderboard for text-to-video and image-to-video capabilities.
  • Commercial Integration: Alibaba plans to integrate the technology into its cloud computing platform for access by enterprise-level clients.
  • Strategic Deployment: Anonymous model releases followed by official branding represent a recurring marketing tactic used by Chinese technology firms to build user momentum.

Alibaba Group has anonymously released a new AI video generation model called HappyHorse-1.0, which rose to the top of an AI model leaderboard and attracted a lot of attention on social media, according to two people with knowledge of Alibaba’s involvement.

The new video model is the Chinese tech giant’s latest effort to step up its fight against its major rival ByteDance in all areas of AI, from models to applications to AI cloud services. Alibaba hasn’t publicly said it is the developer of the HappyHorse model. But Alibaba’s cloud computing unit is preparing to make the new model available to its enterprise customers, according to one of the people.

HappyHorse-1.0 is currently ranked No. 1 in the text-to-video and image-to-video categories on a leaderboard compiled by Artificial Analysis, beating ByteDance’s Seedance 2.0 model, which is ranked No. 2.

ByteDance, which owns TikTok, is a global leader in AI video generation. The company’s Seedance 2.0 model, released in China in February, wowed the world with its ability to create hyperrealistic videos but also faced copyright disputes with Hollywood studios.

Releasing new AI models anonymously under codenames first—and making official announcements later—has become a popular tactic among Chinese companies to generate more excitement among users. Last month, smartphone maker Xiaomi released its new large language model, MiMo-V2-Pro, initially under the codename Hunter Alpha on the OpenRouter AI model marketplace. In February, an anonymous new AI model on OpenRouter called Pony Alpha turned out to be Chinese AI firm Zhipu’s new GLM-5 model.


White-collar industries bet on a secret weapon against AI: trust

  • Technological Impact: Artificial intelligence agents are being deployed to automate complex white-collar tasks, impacting industries such as law, finance, and cyber security.
  • Trust Requirements: High-stakes sectors prioritize authoritative, traceable, and accountable outcomes, viewing these standards as critical differentiators over mere processing speed.
  • Industry Exposure: Market vulnerability varies significantly, with unregulated sectors or those reliant on public data facing higher disruption risks than those requiring proprietary data and domain expertise.
  • Enterprise Integration: Existing software providers are incorporating AI plug-ins to enhance efficiency, though observers note these cannot replace fundamental systems of record or established regulatory frameworks.
  • Financial Evaluation: Public market sentiment remains focused on which firms can effectively capture value, with software companies under pressure to demonstrate clear return on investment through upselling.
  • Cyber Security Dynamics: While AI models demonstrate improved capability in identifying code vulnerabilities, human expertise remains essential for broad deterministic reasoning and handling active adversarial threats.
  • Operational Limitations: Current AI tools are described as ineffective at high-level judgment and prone to hallucinations, rendering them unsuitable for scenarios requiring absolute precision and human accountability.
  • Strategic Collaboration: AI developers acknowledge professional limitations, positioning their software as complementary tools to boost efficiency rather than as replacements for comprehensive enterprise platforms.

AI “agents” are threatening to transform white-collar work, offering cheaper, faster software that can perform the tasks of lawyers, bankers and accountants.

The reality is more complicated.

Since the start of the year, AI companies such as Anthropic have released tools designed to automate tasks across professions, prompting a sharp reassessment of stocks from cyber security to financial services.

But interviews with a dozen chief executives, investors and analysts across sectors suggest not all parts of the market are equally exposed. In industries where accuracy, accountability and regulation matter, trust is emerging as the critical faultline.

“Fiduciary professions like law, tax, audit and compliance require more than . . . accelerating everyday knowledge work,” said Steve Hasker, chief executive of Thomson Reuters.

“What ultimately matters is whether the output is authoritative, traceable and accountable to professional standards. In high-stakes environments, speed alone isn’t the differentiator. Trust is.”


Anthropic released a slew of tools this year that allow professionals from lawyers to bankers to infuse their day-to-day tasks with AI capabilities © Jeenah Moon/Bloomberg

Despite bullishness among some business leaders, there is broad agreement that companies must evolve in the face of products like Anthropic’s Claude Cowork or face extinction.

Customer services software companies, as well as sectors that are unregulated or use public data, such as data aggregators or marketing content firms, are all viewed as under threat.

“It is indisputable that the technology is very powerful,” said Victor Englesson, a partner at Nordic-headquartered investment firm EQT Partners, who focuses on software and technology investments.

“The key question, which the public market has voted on, is who is best positioned to capture that value. At the core, the size of the pie has grown. There will be winners and losers, but it will come down to execution.” 

Investors and executives said they were drawing up war plans to grow and reallocate capital, after Anthropic released a slew of tools this year that allow professionals from lawyers to bankers to infuse their day-to-day tasks with AI capabilities. 

The new products are an extension of the company’s Claude Code, which uses large language models to generate code. Anthropic’s own finance department is using Cowork — a version of Claude Code for non-technical professionals — as it prepares for an initial public offering as early as this year. 

“Our finance team uses this for a lot of our growth projections, so they’re constantly understanding how much capacity we’re using and which customer segments are growing and which we should invest more in,” said Catherine Wu, head of product for Claude Code at Anthropic.

Customisable AI “plug-ins” help automate company-specific software, allowing AI to help with contract reviews, sift job applications, visualise ideas or perform financial analysis and modelling.

When Anthropic first launched these software-focused tools in February, a stock market sell-off dubbed the “SaaS-pocalypse” indiscriminately hit software businesses.

After the initial sell-off, Anthropic unveiled an updated set of tools designed to work with many of those same companies, offering a brief reprieve to their embattled share prices.

Thomson Reuters launched CoCounsel Legal, its AI tool powered by Claude’s latest model, to perform complex tasks from legal research, document analysis and drafting, using its proprietary content.

Hasker said AI models on their own were not necessarily built for fiduciary work, where “professionals are responsible for outcomes and errors carry real consequences”. 

He added that “almost right isn’t good enough, whether in a courtroom, a boardroom or an audit”, adding that grounding systems in domain expertise would become essential in coming years. 

Private investors focused on software agree that AI plug-ins can be bolted on to existing systems to improve efficiency and replace some human roles. But they cannot fully replace core enterprise software built on years of accumulated data across a company’s supply chain and operations.

“[AI] plug-ins . . . can’t replace proprietary data, systems of records, regulatory context, network effects [and] distribution . . . that’s my defensibility,” said Jean-Baptiste Brian, co-chief executive of private equity firm Hg Capital, which invests primarily in software businesses. 

Others argue that even if AI does not entirely kill enterprise software, slowing growth or marginal hits to revenues will leave Wall Street investors heavily exposed. 

“There has been an issue in terms of top-line growth for a long time and that has been amplified by AI,” said Englesson. To prove the public markets wrong, he said, software groups must show that customers are willing to pay for new AI capabilities by upselling and bringing in new customers to show a clear return on investment. 


Anthropic’s own finance department is using Cowork — a version of Claude Code for non-technical professionals © Gabby Jones/Bloomberg

In February, Anthropic released a preview of a cyber security tool, triggering a sharp sell-off across the industry that hit companies including Palo Alto Networks, CrowdStrike, Okta and SentinelOne.

Dan Schiappa, a 20-year veteran of the cyber security industry and president of technology and services at Arctic Wolf, acknowledged that the latest Claude releases were a “vast . . . step function improvement” in terms of cyber security competence. 

“It’s a bit of a glimpse into the future,” Schiappa said. “If you’re in that market of code vulnerability finding, like [start-up] Snyk for example, it is meaningful for your business.”

This week, Anthropic previewed a new model, Mythos, which it said had found “thousands of high-severity vulnerabilities” in every major operating system and web browser.

Mythos’ capabilities could “reshape cyber security”, Anthropic said. It is working with CrowdStrike and Palo Alto Networks, as well as several Big Tech groups, to address the vulnerabilities.

However, Schiappa said that cyber security expertise is far broader than simply spotting vulnerabilities in code.

“AI is great at very directed tasks, it’s not quite great at broad deterministic reasoning yet. It can make a recommendation but it’s not sure enough to take action. That’s where humans come in,” he said. “They have . . . done hand-to-hand combat with adversaries. They know it, they have seen it before, intuitively.”

He added: “We are an industry where we cannot take chances. People’s businesses are at stake.”

In financial services, Anthropic has launched a cluster of plug-ins focusing on subdivisions such as investment banking, private equity, equity research and wealth management. 

The company has said this would enable AI to perform tasks like reviewing large sets of transaction documents, building competitor analyses, parsing earnings transcripts and updating financial models in real time by extracting financial data and modelling scenarios. 

Investors who have used the tools say AI agents like Cowork are good at gathering and parsing unstructured data, and that they are using them for tasks like market research, deal sourcing and screening, and even drafting early-stage investment memos. 

However, Hg’s Brian says the technology remains “absolutely crap at judgment”, describing AI tools as “sycophantic”, or agreeing too readily with a user’s hypothesis. “It can’t do investing right now,” he said. 

Sam Jones, chief operating officer of Compare the Market, which helps customers compare prices for insurance and financial products, said the risk of AI supplanting companies like his was “a pretty key question for the industry at the moment.”

But Jones said companies like his are building a secure harness around AI models to ensure their outputs are accurate, transparent and verifiable, as customers and regulators expect.

“Plugging [AI] into an insurer, whilst technically possible, today generates pretty catastrophic consumer outcomes,” he said. “It can hallucinate on consent, it can hallucinate on terms and conditions, it can hallucinate on prices.”

Anthropic’s Wu emphasised that AI systems need existing software platforms in order to make work more efficient, not replace traditional players. 

“We’re not going to be best in class at collaboration [or] at managing all your sales deals, or all your designs. We would just never have that ability to assume all that functionality within Cowork,” she said. “Being an everything app . . . it feels overly ambitious for us to do.” 

Additional reporting by Alexandra Heal and Antoine Gara



Court Denies Anthropic Request to End Defense Department Punishment - WSJ

  • Legal Setback: A federal appeals court denied Anthropic's request to overturn its designation as a supply-chain risk by the Department of Defense.
  • Security Priority: The court prioritized national security and military interests over the potential financial harm experienced by the private corporation.
  • Contractual Exclusion: The ruling maintains the company's exclusion from new government contracts and Pentagon systems during the ongoing legal dispute.
  • Transition Deadline: The administration has mandated that the Department of Defense phase out the use of Anthropic's artificial intelligence models within a six-month period.
  • Regulatory Conflict: The designation followed failed negotiations regarding the use of Anthropic's technology in fully autonomous weapons and domestic surveillance operations.
  • Judicial Inconsistency: Legal uncertainty persists as this appellate decision contrasts with an earlier injunction issued by a California court against broader federal bans.
  • Industry Compliance: Several competitors within the artificial intelligence sector have already agreed to meet the established governmental terms for defense contracts.
  • Operational Expansion: Despite the defense restrictions, the company is continuing to distribute new artificial intelligence models to various private infrastructure entities.

By Amrith Ramkumar and Heather Somerville
April 8, 2026 7:37 pm ET

Anthropic headquarters in San Francisco. © John G Mabanglo/EPA/Shutterstock

WASHINGTON—A federal appeals court on Wednesday denied Anthropic’s request for relief from the Defense Department declaring the company a supply-chain risk, complicating the legal battle between the U.S. government and one of the country’s leading artificial-intelligence companies.

While Anthropic has sustained financial harm from the Pentagon’s actions, the appeals court said that harm wasn’t weighty enough to override the government on a matter of national security. A separate court in California recently granted Anthropic a preliminary injunction to stop President Trump’s broader ban on all federal agencies using the company’s Claude models.

Wednesday’s decision means that the Defense Department’s designation of Anthropic as a security threat stands, so the company will continue to be excluded from new contracts and Pentagon systems. In late February, Trump gave the Defense Department six months to transition away from Claude, which is being used in the war in Iran.

“On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict,” the appeals court said in its decision. 

The case will now proceed alongside the government’s appeal of the California court ruling, creating legal uncertainty for companies that work with both Anthropic and the government. Some companies have said that they would stop using Claude in their government work to avoid any potential issues.

“The D.C. Circuit’s denial will prolong ambiguities regarding whether political considerations can drive federal procurement,” said Matt Schruers, chief executive of the Computer & Communications Industry Association.

The Pentagon applied the supply-chain risk designation against Anthropic using two different statutes, one of which is under the jurisdiction of the Washington appeals court. 

Defense Secretary Pete Hegseth designated the company a supply-chain risk after attempts to renegotiate its contract with Anthropic fell apart. Anthropic had sought assurances that its models wouldn’t be used in fully autonomous weapons or for domestic surveillance. The Pentagon countered that such prohibitions were unnecessary because military policies or laws already restricted such uses, and pushed for an agreement in which the military could use the AI in all legal applications.

“Today’s D.C. Circuit stay allowing the government to designate Anthropic as a supply chain risk is a resounding victory for military readiness,” said Todd Blanche, the acting attorney general.

The supply-chain risk designation is normally used on companies from U.S. adversaries such as China that pose security threats, making the Defense Department’s move against Anthropic nearly unprecedented.

Some of Anthropic’s competitors including OpenAI and Elon Musk’s xAI have since agreed to the Pentagon’s terms.

The company has said it could lose revenue and investors because of the government’s actions, arguments that the appeals court acknowledged in its decision. 

“We’re grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply-chain designations were unlawful,” an Anthropic spokeswoman said. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

The risks posed by AI were illustrated again on Tuesday, when Anthropic said it is making a preview version of its new AI model available to about 50 companies and organizations that maintain critical infrastructure, including U.S. tech giants, before releasing it publicly. The company said it worried that releasing it now could cause widespread disruptions online.

Copyright ©2026 Dow Jones & Company, Inc. All Rights Reserved.

Amrith Ramkumar is a reporter for The Wall Street Journal in Washington covering tech and crypto policy. He previously covered clean energy and was a Journal markets reporter in New York who wrote about special-purpose acquisition companies, or SPACs, when SPAC mergers were a popular alternative to traditional initial public offerings. He also previously wrote about stocks and commodities, including battery metals such as lithium and cobalt.

Amrith joined the Journal as a markets intern after graduating from Duke in 2017.

Heather Somerville is a reporter at The Wall Street Journal in San Francisco covering technology and national security. Her articles explore the national-security implications of emerging technology, U.S. efforts to counter China's rise as a technology power, and the relationship between Silicon Valley and the U.S. defense complex.

Heather joined the Journal in 2019 to cover venture capital and technology companies. Before that, she wrote about venture capital and Silicon Valley startups for Reuters and the Mercury News. She was previously a reporter for the Fresno Bee and the Charlotte Observer and wrote about national security for outlets in Washington, D.C.




At David Sacks’s Behest, White House Barrels Forward on Industry-Friendly AI Policy - WSJ

  • Economic Focus: The Trump administration prioritizes the expansion of artificial intelligence as a primary driver for domestic economic growth and gross domestic product.
  • Strategic Dominance: White House officials emphasize the necessity of maintaining U.S. technological superiority over China as a core objective of current AI policy.
  • Labor Perspectives: Administration representatives dismiss concerns regarding job displacement, citing the belief that artificial intelligence will eventually create new and plentiful employment opportunities.
  • Regulatory Stance: Current policy approaches favor minimal government interference to encourage private sector innovation, though some recent framework adjustments suggest potential openness to workforce training programs.
  • Public Sentiment: Polling data indicates that approximately 75% of Americans believe the federal government is currently failing to implement sufficient regulatory measures for the industry.
  • Political Schisms: Internal critiques from within the administration's allies suggest that ignoring public apprehension about technology risks negative electoral consequences in the upcoming midterms.
  • State Actions: Regional authorities continue to implement localized restrictions, such as construction bans on data centers and state-specific operational requirements, independent of federal policy preferences.
  • Leadership Changes: David Sacks recently transitioned from the role of AI and crypto czar to co-chair of a White House advisory group, an adjustment viewed by some observers as a reduction in his direct influence on policy.

By Amrith Ramkumar and Lindsay Ellis
April 8, 2026 9:00 pm ET



David Sacks has become the face of the Trump administration’s AI strategy. © Chip Somodevilla/Getty Images

  • The Trump administration prioritizes AI’s economic benefits and U.S. tech dominance, playing down job loss concerns.

  • David Sacks, co-chair of a White House advisory group, promotes AI as an economic driver, creating jobs and boosting GDP.

  • Some allies of President Trump and a Quinnipiac poll indicate public concern about the lack of AI regulation.

This summary was generated with AI and reviewed by an editor. Read more about how we use artificial intelligence in our journalism.


WASHINGTON—At a recent gathering of tech executives and lawmakers, David Sacks pitched artificial intelligence as a driving force of the U.S. economy. 

Building data centers that run AI models is creating thousands of jobs, lifting wages for blue-collar workers and boosting gross domestic product, said Sacks, the face of the Trump administration’s AI strategy. Addressing a proposal from Sen. Bernie Sanders (I., Vt.) to ban new data centers that run AI models, Sacks told the audience at the Hill & Valley Forum: “Just think about all the damage that would do to our economic growth.”

The comments from the venture capitalist help explain President Trump’s company-friendly AI approach. Sacks and other advisers have brushed aside mounting concerns about AI, arguing that the economic benefits of the technology will make it more popular. Last year, Sacks said, “we’ve got to let the private sector cook.”

Some advisers to the president have acknowledged the unpopularity of AI, but the administration plans to continue emphasizing ahead of the midterms the importance of winning the tech race with China rather than address concerns about issues including job losses, White House officials said. The administration’s discussions about AI haven’t focused on job losses due to a belief that the technology will contribute to a booming economy with plentiful opportunities, the officials said.

That approach is stoking concern among some Trump allies, such as former adviser Steve Bannon, who warned it risks political blowback in November. Nearly 75% of Americans think the government isn’t doing enough to regulate AI, according to a recent Quinnipiac University poll.

“They’re totally out of touch with the American people on this issue,” Bannon said in an interview. He said AI has risen to be a top priority, along with immigration, for listeners of his “War Room” podcast.

[Chart: Quinnipiac University poll of 1,397 adults, conducted March 19-23. Shows the percentage of Americans who are very or somewhat concerned about AI, trust AI hardly ever or only some of the time, think the government isn’t doing enough to regulate AI, think AI advancements will reduce job opportunities, oppose building a data center in their community, and think AI will do more harm than good in daily life.]

The White House referenced issues linked to some voter concerns in a recent framework it published to guide AI legislation in Congress, but didn’t mention job loss. Few lawmakers and lobbyists expect a potential bill based on the framework to impose meaningful guardrails on companies, many of which donated to Trump’s inauguration and White House ballroom and promised sizable domestic investments.

“It is the policy of the Trump administration to sustain American AI dominance to protect our national security and ensure we remain the world’s leading economy,” White House spokeswoman Liz Huston said. 

Companies attributed nearly 100,000 layoffs to AI between 2023 and March 2026, according to outplacement firm Challenger, Gray & Christmas. Sacks has said AI contributes to a fraction of total layoffs and is likely overstated by executives seeking to blame the technology for their decisions. He has blamed AI fears on “doomers” spreading worst-case scenarios.

While AI isn’t a top issue for many voters, it is quickly becoming more prominent, prompting the industry to pour hundreds of millions of dollars into political-action committees. States continue to pursue their own AI bills, defying Trump’s efforts to ban state efforts. Maine is poised to ban new data-center construction. California Gov. Gavin Newsom recently signed an executive order imposing new requirements on AI firms working with the state.

“AI is in part shaping how people think about their economic prospects, and that in turn is going to influence how they’re going to come out in the polls,” said Aalok Mehta, who leads the Wadhwani AI Center at the Center for Strategic and International Studies think tank. Mehta previously worked on AI policy at OpenAI and Google.

Curtis Carmichael III, 35, said he votes with his pocketbook and has backed candidates from both parties. He left his job scouting locations for the film industry in 2024 because he found AI could do much of that work. Now a firefighter in the Atlanta area, he said he has less pay but more job security.

He said he’s skeptical when candidates don’t talk about AI. A priority for him is banning fake AI content from social media, he said. “Five years ago, AI was not even in the stratosphere,” he said, but now it is a major concern for him.

Sacks had been serving as AI and crypto czar but recently had his title changed to co-chair of a White House advisory group. Some administration officials viewed the move as a demotion of sorts for Sacks, moving him further from meaningful AI policy decisions, people familiar with the matter said. Bannon said he believes Sacks was effectively sidelined and that a potential bill based on the framework is “dead in the water.”

Other officials disputed that Sacks was demoted, and Sacks told Bloomberg TV that he had used up the days he was allowed to work as AI and crypto czar as a special government employee.

After The Wall Street Journal reached out to Sacks for comment, he posted on X that the White House’s AI framework has excellent engagement in Congress.

Republicans are divided on AI, with many seeking more government involvement to protect consumers. The White House is making some compromises by saying it wants to allow for protections and workforce training programs in its legislative framework. Such policies are difficult to implement because few know what future jobs will entail and what protections will be needed.

The latest framework still signals to some in Washington that the administration is open to some tweaks to its approach.

“There is space for AI innovation and reasonable guardrails to coexist,” said Tony Samp, head of AI policy at lobbying and law firm DLA Piper.




Elephants Only Sleep for Two Hours a Night. Here's Why | RealClearScience

1 Share
  • Elephant Sleep Patterns: Wild African elephants in Botswana exhibit an average sleep duration of two hours per day.
  • Sleep Deprivation Adaptability: These animals maintain functionality even after skipping an entire night of rest without requiring compensatory napping.
  • Species Specificity: Biological constraints inherent to elephants prevent the adoption of these sleep habits by human beings.
  • Mammalian Sleep Trends: Observations indicate an inverse correlation between physical body mass and total hours of sleep required.
  • Brain Restoration Function: Sleep serves primarily as a specialized biological state dedicated to the repair and reorganization of cerebral tissues.
  • Kleiber's Law Application: Metabolic rates scale to the 3/4 power of body mass, dictating the energy consumption patterns of various organisms.
  • Cellular Metabolic Demands: Smaller brains experience higher rates of waste production and cellular damage per unit mass compared to larger brains.
  • Comparative Biological Efficiency: The divergence in sleep needs across vast differences in animal mass is mathematically consistent with existing metabolic energy theories.

The study was quite the sensation when it was published in March of 2017. Scientists affiliated with the University of the Witwatersrand in South Africa tracked African elephants in the wilds of Botswana for a month straight and found that they slept for only two hours per day on average! What’s more, the animals could skip a night's sleep without needing extra naps the next day. Huh? How? Could we learn to sleep like that?

Alas, elephants' amazingly efficient sleep schedule is likely unique to them, not something humans could adopt through scientific means.

The study also reinforced a trend that extends across the mammalian class: the bigger the mammal, the less sleep it needs.

Some of the tiniest bats routinely sleep 18 hours a day. Cats sleep between 12 and 16 hours. Chimpanzees doze about 10 hours. Cows snooze for about four. Yes, there are exceptions here and there – prey animals sleep less, predators more – but the trend is pretty consistent.

Gravett et al. / PLoS ONE

One of the tidiest theories to explain it was proposed almost two decades ago, and it has held up remarkably well. Scientists Van Savage and Geoffrey West at the Santa Fe Institute speculated that, "the process identified as sleep is a special state of the brain that is devoted primarily to the critical activities of repair and reorganization." Other organs and tissues could be repaired when awake, but not the brain.

Larger mammals thus need fewer hours of sleep because they have slower metabolisms than small mammals. This follows from a biological principle called Kleiber's Law, named for Max Kleiber, the biologist who discovered it. The law states that for the vast majority of animals, an organism's metabolic rate scales to the 3/4 power of its body mass. Pound for pound, smaller animals consume more energy.
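The scaling can be sketched numerically. A minimal Python illustration, using round, illustrative masses (roughly 20 g for a mouse and 5 t for an elephant, not figures from the study):

```python
# Kleiber's law: metabolic rate B scales with body mass M as B ~ M**0.75,
# so the rate per unit mass scales as M**-0.25. Pound for pound,
# smaller animals burn more energy.

def per_mass_rate(mass_kg: float) -> float:
    """Relative mass-specific metabolic rate (arbitrary units)."""
    return mass_kg ** -0.25

mouse = per_mass_rate(0.02)     # ~20 g mouse (illustrative)
elephant = per_mass_rate(5000)  # ~5 t elephant (illustrative)
print(f"mouse tissue burns ~{mouse / elephant:.0f}x more energy per kg")
```

On these masses the ratio comes out to roughly 22x, which is the sense in which each gram of a small brain runs hotter than each gram of a large one.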

This applies to all tissues and organs, even the brain. Each cell in the half-gram brain of a mouse demands more energy than a comparable cell inside a five kilogram elephant brain. Smaller brains thus produce proportionally more waste products and become more 'damaged' during operation per unit mass than large ones, requiring more time asleep to get them back into working order.

"The theory is able to explain sleep times that differ by a factor of seven across organisms that differ in mass by six orders of magnitude," Savage and West wrote.
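As a quick sanity check on those quoted figures (the factor of seven and the six orders of magnitude come from the article; the power-law form is an assumption, not a claim from the paper), one can solve for the effective scaling exponent:

```python
import math

# If sleep time follows t ~ M**-k, then a factor-of-7 spread in sleep
# across a 10**6-fold spread in body mass gives 7 = (10**6)**k.
k = math.log(7) / math.log(10 ** 6)
print(f"implied exponent k = {k:.3f}")
```

That works out to k of about 0.14: sleep time falls off quite slowly with mass, which is exactly how a mere factor-of-seven difference can stretch across six orders of magnitude.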

It also explains why elephants can thrive on just a couple of hours of shut-eye.


Who Is Satoshi Nakamoto? My Quest to Unmask Bitcoin’s Creator - The New York Times

1 Share
  • Documentary Discovery: A podcast episode regarding an HBO film about the identity of Satoshi Nakamoto prompted immediate viewer interest.
  • Previous Research: The author previously abandoned a personal project dedicated to uncovering the identity of the Bitcoin creator due to its complexity.
  • Industry Impact: Bitcoin functions as a global financial revolution representing a multi-trillion dollar asset class under the control of an anonymous founder.
  • Media Consumption: The HBO production titled “Money Electric: The Bitcoin Mystery” presented a specific Canadian developer as the likely identity behind the pseudonym.
  • Evidence Evaluation: Viewers of the documentary found the core thesis regarding the individual identified by the filmmakers to be insufficiently supported by facts.
  • Cryptographic Figure: Adam Back appears in the program during an interview filmed on location in Riga, Latvia.
  • Interview Conduct: The subject exhibited notable physical tension and requested privacy when the topic of his own potential Satoshi identity arose.
  • Behavioral Analysis: Observations of non-verbal cues and nervous mannerisms led to personal speculation regarding the veracity of the cryptographer's public statements.

One evening in the fall of 2024, my wife and I were sitting in traffic on the Long Island Expressway when, tired of listening to the jazz-funk station I often played on our drives, she switched to a podcast.

It was “Hard Fork,” the New York Times tech show, and the hosts were discussing a new HBO documentary claiming to have unmasked Bitcoin’s pseudonymous inventor, Satoshi Nakamoto.

I was instantly riveted. I had long considered the question of Satoshi’s true identity one of our age’s great enigmas and had poked at it before without success. Two years earlier, I had even spent several months researching a book on the subject. But I soon realized I was out of my depth and reluctantly gave up.

Hearing that someone else might have finally identified the shadowy figure who had revolutionized finance, spawned a $2.4 trillion industry and amassed one of the world’s biggest fortunes in one stroke of staggering genius aroused in me a mixture of admiration and envy. I couldn’t wait to watch the film. As soon as we got home that night, I logged in to the HBO Max app and pressed play.

In the end, I found the conclusion of “Money Electric: The Bitcoin Mystery” unconvincing: HBO singled out a Canadian software developer based on what seemed like very thin evidence. But as I watched what was an otherwise entertaining romp through the world of crypto, one scene caught my attention.

Adam Back, a British cryptographer and leading figure in the Bitcoin movement, sat on a park bench in Riga, Latvia, his shirt untucked under a brown coat. The filmmaker casually rattled off the names of several Satoshi suspects. At the mention of his own name, Mr. Back tensed up, strenuously denied he was Satoshi and asked that the conversation be kept off the record.

Having encountered my share of liars and developed something of an expertise in their tells, I found Mr. Back’s demeanor — his shifty eyes, his awkward chuckle, the jerky movement of his left hand — fishy. When the credits rolled, I replayed the sequence several times on my TV.
