
Stablecoin issuer Tether has so much cash that it's becoming a VC - PitchBook

  • Corporate evolution: Tether is using excess cash reserves to invest in a diverse range of non-crypto startups and traditional businesses.
  • Strategic objective: Its venture investments aim to expand global commerce usage and adoption of Tether's stablecoin infrastructure.
  • Market dominance: The firm is the largest stablecoin issuer, with more than $180 billion in tokens in circulation.
  • Investment portfolio: Holdings exceed $20 billion, funded by profitable operations independent of customer reserves.
  • Diverse interests: Capital deployments span sectors such as health technology, sustainable agriculture, humanoid robotics and professional sports.
  • Leadership transition: A chief investment officer handover comes amid ongoing management of significant Treasury and gold positions.
  • Financial performance: The company reported more than $10 billion in profits in 2025, supported by extensive US Treasury holdings.
  • Market context: Crypto-focused firms are increasingly directing capital toward startups outside the industry, surpassing their crypto-focused venture spending.

Stablecoin issuer Tether, flush with cash, has a new pastime: investing in other companies, including Eight Sleep, which makes a popular smart mattress.

Tether is going beyond the usual corporate VC playbook of investing in nascent upstarts using its technology or in direct business partners. Instead, it seems to be putting its piles of cash to work in creative ways amid the currently depressed cryptocurrency markets.

“Tether has more money than it knows what to do with,” a crypto industry investor told PitchBook.

While perplexing at first glance, Tether’s VC activities suggest it’s playing a long game. Investing off its own balance sheet, the bet is that the firm’s checks—directly or indirectly—will pull more commerce and partners toward its stablecoin, USDT, according to two industry insiders and one person familiar with the company’s strategy.

In other words, it’s less of a detour and more of an attempt to widen the universe of businesses willing to use stablecoins.

Tether, founded in 2014, is the world’s largest stablecoin issuer by a wide margin. The USDT token, designed to trade in tandem with the dollar, has a circulating supply of more than $180 billion. Next largest is USDC, a dollar-pegged stablecoin issued by Circle, with a circulating supply of about $79 billion.

Tether has raised about $600 million in total and was valued at $12 billion in November 2024, according to PitchBook data. In September, it was in talks to raise new funding at a $500 billion valuation, Bloomberg reported.

Tether recently disclosed that its investment portfolio is worth more than $20 billion, including bets beyond its venture checks. The company says its VC investments are funded from excess profits off its balance sheet, not customer funds, and do not count toward Tether’s USDT reserves.

“We believe technology can serve as the foundation of a more stable society by expanding access to modern financial systems, communications infrastructure, energy, and intelligence for the billions of people who remain underserved by traditional financial institutions and large technology platforms,” the company said in a statement to PitchBook about its venture investments.

Tether isn’t the only VC-backed crypto company to back non-crypto startups, but the trend has sharpened recently.

So far in 2026, VC-backed crypto companies have invested $1.4 billion in startups outside the industry, more than twice the $600 million they invested in crypto-focused startups, according to PitchBook data.

On Wednesday, Tether announced that chief investment officer Richard Heathcote is stepping down for personal reasons and will be replaced by deputy CIO Zachary Lyons. Heathcote oversaw the management of Tether’s reserves—including large positions in US Treasurys and gold—as well as the company’s unusual VC investments.

Heathcote, who was previously a trader at BGC Group in London, joined Tether as CIO in 2023 after working for a bank with which Tether did business.

Tether connects its VC dots

While some of Tether’s checks are easier to map to stablecoin use cases, others—like its investment in Eight Sleep—are less straightforward.

Tether said the sleep tech startup would benefit from an AI healthtech platform it is building. And Tether said that its backing of South American landowner Adecoagro “reflects the importance of sustainable agriculture and food production.” It’s also invested in humanoid robotics company Generative Bionics, as well as Gold.com.

And it’s not stopping at technology. Tether acquired a minority stake in Italian football club Juventus in February 2025 and later submitted a proposal to acquire the club in a deal valued at more than $1 billion. The bid was rejected.

At the same time, it’s made some more typical VC investments, such as fintech startup Whop, which agreed to offer Tether’s stablecoin technology as an option for its transactions. Similarly, Tether recently disclosed its investment in bitcoin infrastructure startup Ark Labs’ $5.2 million round. It’s also invested in Axiym, a startup using blockchain tech for treasury solutions, cross-border payment processing and settlement.


Its fundamental business has faced some challenges over the years. In 2021, Tether came under fire for opacity around its reserve assets, including investing reserves in illiquid assets. In response, the company has begun publishing quarterly reports and said it’s since bolstered its reserves.

In late 2025, S&P Global Ratings downgraded its assessment of Tether’s ability to maintain USDT’s 1:1 backing to “weak,” citing volatility in its bitcoin reserves.

Still, the company disclosed profits of more than $10 billion in 2025, with $6.3 billion in excess reserves and record US Treasury holdings of $141 billion.

“I know everyone is all hyped about AI, but I don’t know any AI company that made that much of a profit last year,” one crypto investor said, adding that “they just have so much money that they’ve turned into something like a massive holding company.”


The Backlash Against AI Devices That Are Always Watching - WSJ

  • Judicial oversight: A judge recently ordered the immediate deletion of video recordings captured by wearable AI devices.
  • Hardware integration: Tech giants are developing glasses and personal gadgets equipped with cameras and microphones to facilitate AI assistance.
  • Data collection: Modern AI systems are designed to monitor and process daily user interactions to provide personalized insights and support.
  • Privacy concerns: Widespread wearable recording technology raises significant questions about personal privacy for both device users and bystanders.
  • Data sovereignty: Industry leaders suggest that processing information locally on device chips could limit cloud-based data exposure.
  • Surveillance risks: Analysts warn that aggregated digital communications could enable mass monitoring of public sentiment and political dissent.
  • Regulatory hurdles: Companies such as Meta are facing legal action over allegations of unauthorized processing of private domestic activities.
  • Corporate practices: Concerns persist about the use of contractors to manually label sensitive user data collected from smart wearables.

The judge wasn’t amused and said that any video recordings made by such devices needed to be deleted ASAP. “This is very serious,” the judge reportedly admonished.

The glasses—with chunky frames embedded with cameras and microphones—are the way Zuckerberg imagines AI will be democratized for personal users. Eventually, he wants to offer something akin to god-like superintelligence on demand.

He isn’t alone. Other personal devices are expected soon, including a mystery option from OpenAI.  

The promise of AI is that it will become more and more useful because such devices allow it to see and hear your daily life, gobbling up that information, processing it and using it to inform you about your life. 

The sales pitch is appealing. Frankly, it would be nice to have Tony Stark’s Jarvis AI on tap to help navigate life. In my case, I suspect I’d end up using it for rather pedestrian things. Obviously, it would be helpful one day to have a visual prompt in my glasses to remind me of the name of the nice lady from the fifth floor of my apartment building who knows my dog yet whose name always escapes me. (Dolores?) 

But at what cost to privacy? Mine and hers. 


It is a question informed by having lived through Web 2.0, an era of technology fueled by user data. We’re moving from a world of cookies following us around the web and smartphones gathering physical location data to something new: AI devices—whether they’re glasses, pins or pendants—hoovering up everything in eye- and ear-shot.

Orwellian fears are growing about a new kind of surveillance state that was once just the stuff of nightmares. Such worries are at the heart of the feud between AI powerhouse Anthropic and the Defense Department. Not everyone, though, is so worried about the government’s role in all this.

Emil Michael, undersecretary at the Pentagon for research and engineering, has been quick to note that tech companies are the ones scraping the internet for  user data. “If anyone collected bulk data on Americans, it’s the AI companies, not us,” he told CBS News last month. 

He isn’t wrong. And now those tech companies are dreaming of flooding the world with AI devices that see the world with a different kind of clarity.  

Even Anthropic chief Dario Amodei has been cautioning that AI has the capabilities to take all of those fragmented pieces of information being captured digitally and knit them together in ways never before possible to give new insights about us all. 

Specifically, Amodei, in an essay in January, wrote that AI could read and make sense of all the world’s electronic communications and, maybe, even in-person communications if recording devices can be commandeered. 

“It might be frighteningly plausible to simply generate a complete list of anyone who disagrees with the government on any number of issues, even if such disagreement isn’t explicit in anything they say or do,” he cautioned. “A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.”

Anthropic CEO Dario Amodei has written of an AI that could take in all the world’s communications. Ludovic Marin/AFP/Getty Images

Tech companies are aware of the risk, or, at least the optics around it, and say there are ways to prevent it. Qualcomm Chief Executive Cristiano Amon recently told me that one of the ways to address privacy concerns around personal AI gadgets is to process the data on-device rather than in the cloud where AI is mostly operating these days. He has a dog in the fight as he’s the guy selling chips to AI device makers, such as Meta. 

“We’re developing chips … for what we call ambient AI, perception AI. Those things run on your device, provide an analysis of the context on your device and inform the agent on your device,” he told me for an episode of the “Bold Names” podcast. “And it’s really going to be the control of the user, whether you wanted to send those things to the cloud or not.”

That’s a common argument by companies: Users get to opt into the service. It’s their choice to share the data that is useful not only for customizing AI but also training the company models. It’s an argument we’ve heard for the past generation with Web 2.0, though what starts as an option increasingly feels less so for offerings that are really about collecting user data.

Even before AI, Meta and its stable of social-media sites have long been at the heart of online privacy debates. In 2019, the Federal Trade Commission imposed a then-record $5 billion fine against the company for violating consumer privacy.

And the company’s early steps into AI glasses are already facing criticism. Earlier this month, Meta was named in a lawsuit that seeks class-action status over concerns that data is being gathered from those glasses in ways that violate users’ privacy. 

The lawsuit, citing whistleblower complaints, alleges that video captured on Meta’s devices is being routed to contractors in Africa, who manually view and label the data to train Meta’s AI models. Among the videos in question? “People changing clothes, using the bathroom, engaging in sexual activity, handing financial information, and conducting other private activities inside their homes that no reasonable consumer would ever expect a stranger to watch,” the lawsuit said.


Meta hasn’t yet responded to the specific claims in court. In a statement to me, the company underscored that unless users choose to share media they’ve captured, such data remains on their devices.

“When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do,” a company spokesman said. “We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed.” 

As we approach this brave new world, users are hearing the risks and trade-offs. Frankly, being reminded of Dolores’s name in the elevator will probably win the day.


Guerrilla artist Banksy finally unmasked by investigation

  • Artist Identification: Reuters investigators identified the street artist Banksy as Robin Gunningham, a 51-year-old from Bristol.
  • Name Change: Gunningham reportedly adopted the alias David Jones in 2008 to utilize a common British name for anonymity.
  • Evidence Collection: The investigation utilized forensic information, including a 2000 NYPD arrest record, interactions with locals in Ukraine, and photographic evidence.
  • Alternative Theories: The report explicitly refutes previous speculations that musician Robert Del Naja from the group Massive Attack is the artist.
  • Legal Response: Representatives for Banksy challenged the accuracy of the investigation and maintained that anonymity protects the artist from harassment and threats.
  • Anonymity Rationale: The artist’s legal team claims that pseudonymous work serves an important societal function by allowing for political and social commentary without fear of retribution.
  • Journalistic Defense: Reuters justified the disclosure by citing public interest regarding the artist’s significant influence on culture and the art market.
  • Commercial Valuation: The artist’s works, specifically the shredded piece "Love Is in the Bin," have successfully garnered millions of dollars in international auction sales.

The infamous graffiti artist known as Banksy has finally been unmasked — after changing his name to something so generic, he could hide in plain sight.

The notorious guerrilla street artist, whose polarizing works have sold for millions of dollars, was identified as Robin Gunningham, 51, from the English city of Bristol, in a detailed investigation by Reuters on Friday.

The report found that Gunningham changed his name to David Jones — one of the most common British male names — in 2008 to avoid identification.

Banksy’s infamous artwork “Love Is in the Bin.” AFP via Getty Images

“It is one of the most popular names in Britain, so common it helps him hide in plain sight,” the report states.

As part of their investigation, reporters pulled information from a trip to war-torn Ukraine, where he was photographed and spoke with locals; a falling out with Jamaican photographer Peter Dean Rickards; and a 2000 NYPD arrest report including a signed, handwritten confession.

Gunningham/Jones has previously been identified as Banksy — dating back to a Mail on Sunday report in 2008. However, Reuters reporters put together several forensic pieces of evidence to come to their conclusions.

The report’s authors say they have also disproven the theory that Banksy was really musician Robert Del Naja, frontman of famed Bristol group Massive Attack.

Confusingly, their investigation found that Del Naja was also in Ukraine in 2022, but Reuters reported that he was there with Gunningham.

A spokesperson for Banksy did not respond immediately to requests for comment.

Banksy’s street art has polarized critics. Getty Images

In a statement, the artist’s lawyer, Mark Stephens, told Reuters that his client “does not accept that many of the details contained within your enquiry are correct.”

Banksy maintains his anonymity because he has “been subjected to fixated, threatening and extremist behavior,” Stephens added.

“[Working] anonymously or under a pseudonym serves vital societal interests. It protects freedom of expression by allowing creators to speak truth to power without fear of retaliation, censorship or persecution — particularly when addressing sensitive issues such as politics, religion or social justice,” the statement concluded.

Another Banksy artwork in London. Getty Images

Reuters defended naming Banksy, arguing that “the public has a deep interest in understanding the identity and career of a figure with his profound and enduring influence on culture, the art industry and international political discourse.”

Among Banksy’s most famous works are “Girl With Balloon,” a simple stencil drawing of a young girl letting go of a red, heart-shaped balloon that, mystifyingly, was named as the British public’s favorite piece of British art in one opinion poll.

In 2018, the design was at the center of a stunt when a framed copy of the work was shredded after being sold at auction by a mechanical device Banksy had hidden within the frame.

The artist confirmed he was responsible for the shredding, later giving the altered piece the new name “Love Is in the Bin.”

It was sold for $25.4 million in 2021.


OpenAI’s Bid to Allow X-Rated Talk Is Freaking Out Its Own Advisers - WSJ

  • Strategic objective: OpenAI intends to integrate sexually explicit text conversations into ChatGPT despite significant internal and external opposition.
  • Safety concerns: Advisory council members expressed severe apprehension about potential emotional dependence and risky behavioral outcomes for users.
  • Implementation delays: Technical challenges and internal concerns forced a deferral of the launch originally planned for the first quarter.
  • Verification errors: Current age-prediction systems showed significant inaccuracies while attempting to restrict minors’ access during testing.
  • Content restrictions: The proposed feature would permit text conversations while maintaining blocks on nonconsensual content and on generating erotic media.
  • Psychological risks: Internal documents identify potential problems including compulsive use and negative impacts on real-world social relationships.
  • Competitive environment: Financial pressures and a diminishing technological lead are pushing company leadership toward new avenues for user growth.
  • Regulatory philosophy: Leadership maintains that adults should have autonomy over personal interactions, similar to standards applied in other media industries.

Then OpenAI dropped a bombshell: Despite the concerns, it was forging ahead with its erotica plans.

When they assembled for the January meeting, council members were unanimous—and furious. They warned that AI-powered erotica could foster unhealthy emotional dependence on ChatGPT for users and that minors could find ways to access sex chats, according to people familiar with the matter.

The people said that one council member, citing cases where ChatGPT users have taken their own lives after developing intense bonds with the bot, claimed that OpenAI risked creating a “sexy suicide coach.”

The debate is the latest flashpoint in the continuing conversation about how to anticipate the potential positive and negative impacts of AI on the economy, society and individuals.

In proposing to allow sexually explicit conversations with its popular chatbot, OpenAI exposed fractures over how to balance rapid user growth and digital freedom with safety and child protection—issues that many believe were belatedly confronted when social media made its debut a generation ago.

Earlier this month, OpenAI announced it would delay the launch of adult mode, previously slated for the first quarter, saying it was prioritizing other products. The change was also due in part to internal concerns and technical challenges, the people said. But the company made clear it does plan to release it eventually. 

One issue the company is tackling: its new age-prediction system aimed at keeping minors from having adult-themed chats was at one point misclassifying minors as adults about 12% of the time, people familiar with the matter said. That error rate could allow millions of the company’s approximately 100 million under-18 users each week into erotic chats.

The company has also wrestled with how to lift ChatGPT’s restrictions on erotica while still blocking scenarios that the company wants to keep off limits, like those featuring nonconsensual behavior or child sexual abuse, the people added. When the adult mode launches, OpenAI plans to allow text conversations but restrict ChatGPT’s ability to generate erotic images, voice or video. 

Even within those limits, OpenAI staffers have identified several risks, including the potential for compulsive use, emotional overreliance on the chatbot, a drive toward more extreme or taboo content and crowding out offline social and romantic relationships, according to documents reviewed by The Wall Street Journal.

An OpenAI spokeswoman described its plan as allowing ChatGPT to generate text chats with adult themes, characterizing the content as smut rather than pornography. The spokeswoman added that the company’s age-prediction algorithms show performance similar to the rest of the industry, but will never be completely foolproof.

OpenAI also trains its models not to encourage exclusive relationships with users, and to remind users that they need to have relationships in the real world, the spokeswoman added.

The company, which has hired mental health experts and built out a youth well-being team, added that it has a developed plan to monitor for a range of potential long-term effects of adult mode, both positive and negative.

Altman’s plans to roll out adult mode come at a challenging time for his company. Its technological lead over rival AI players has diminished as it competes to attract users and funding. The company’s financial losses are mounting, and multiple lawsuits allege ChatGPT contributed to harms for users and others.

Frontiers of tech

Sexual content has long been an early feature of new technologies—from photography, to the web, to virtual reality. The same has been true for AI. Companies including Character.AI have launched chatbots that have developed intimate relationships with users, and the pornography industry has adopted generative AI to create adult entertainment.

Big tech companies have had a complicated relationship with explicit content, trying to balance the libertarian ethos of Silicon Valley with the demands of advertising-supported businesses and the imperative of protecting minors online. Meta Platforms prohibits nudity and sexual activity on Facebook and Instagram. Alphabet’s YouTube bans explicit content meant to be sexually gratifying, and Google search blurs explicit images in its results by default.

As they grapple with where to draw boundaries around AI, Elon Musk’s xAI has been among the more permissive. It built a sexily clad avatar named Ani into its Grok chatbot, which led to criticism when users were able to use it to digitally undress images of people. Musk later said he would restrict the feature to paying users rather than making it available to all.

On Thursday Musk said on X that Grok’s video-generation tool would start allowing generation of content that would be “allowed in an R-rated movie.”

Meta allows its AI chatbot to engage in romantic role play, the Journal has reported, but the company said the feature isn’t available to accounts registered to minors. The company said it is also building parental controls for its AI characters.

OpenAI officials, for their part, have said they don’t feel comfortable banning sexual content for adults. Some OpenAI staffers have expressed concern that blocking erotic chats relies on similar logic that in the past was used to ban topics that were previously culturally taboo, such as LGBT content. Altman has also suggested that allowing explicit content would likely juice growth and produce extra revenue. 

OpenAI’s first brushes with sexual chats came more than a year before releasing the ChatGPT chatbot interface. In early 2021, executives noticed that a large portion of the traffic for one of OpenAI’s business customers, a text-based choose-your-own adventure game called AI Dungeon, wasn’t appropriate for work, people familiar with the matter said.

AI Dungeon sometimes steered users into themes of violent sexual exploitation without the user prompting it, the people said. Other times, when a user prompted the game with “tame” sexual themes, AI Dungeon would escalate the conversation into a much more intense sexual exchange, the people said.

Erotic role play also proliferated on a clunky OpenAI interface for developers before the company launched ChatGPT. Sometimes, the AI would insert sexual themes into conversations that users weren’t seeking: if a user described a man and his daughter entering a room, an “uncomfortable amount of the time” the AI would proceed to depict a scenario involving incest, one of the people said.

These incidents forced OpenAI’s executives to reckon with the existence of AI erotica on their platform, and sometimes, themes of sexual violence and child exploitation. They then removed AI Dungeon from the platform.

Mental-health experts warn that teens in particular may not be prepared to handle romantic or sexual exchanges with chatbots. In testing conducted by child-safety nonprofit Common Sense Media late last year and earlier this year, both Grok and Meta AI sometimes sent explicit or sexualized content to teens.

In some cases, sexual chats with teens have had tragic consequences. In late 2024, Sewell Setzer, a 14-year-old boy in Florida, killed himself at the prompting of a chatbot from Character.AI with whom he was in love and shared explicit chats, according to a lawsuit filed by his mother. The company later blocked teens from accessing open-ended chats and settled the lawsuit.

Warning signs

Around 2021, OpenAI’s employees working on safety issues were starting to see warning signs around the mental health of some of the people who spent long periods of time using AI. At the time, OpenAI’s safety employees relied on tools to moderate content that were too blunt to draw clear lines between types of erotica the company wanted to allow, such as mainstream smut, and the stuff the company considered off limits, such as nonconsensual depictions, descriptions involving minors and other illegal content.

Employees also feared that if they allowed erotica, the draw of that type of conversation might subsume the platform’s other use cases. “We didn’t want to be just an erotica company,” one former employee recalled.

OpenAI safety employees formalized these ideas into some of the company’s first content policies in late 2021. For the first time, OpenAI forbade erotic content. 

When OpenAI released ChatGPT in the fall of 2022, the AI model powering it was trained to refuse requests that violated the company’s rules, including ones that asked for AI erotica. Since then, OpenAI’s policy has been to ban erotic content, though since mid-2024 the company has said it is exploring how to allow erotica and other NSFW, or not-safe-for-work, content in “age-appropriate contexts.”

At times staffers have questioned the erotica ban. In 2024, a faction of OpenAI employees and executives again raised the idea of getting into racier content, and suggested a raft of porn-related products. Other employees pushed back, saying they feared OpenAI was already struggling with a lot of the core areas they wanted to be able to offer safely, especially around the mental health of their users. The AI porn product ideas fizzled.

Altman has also expressed conflicted feelings about AI erotica. When asked on a podcast in August if there were decisions he had made that were “best for the world, but not best for winning,” Altman replied: “We haven’t put a sex bot avatar in ChatGPT yet.”

Altman indicated erotica would boost growth and revenue, but said it wouldn’t align with his company’s long-term incentive of serving users. “I’m proud of the company and how little we get distracted by that,” Altman said. “But sometimes we do get tempted.”

Two months later, Altman appears to have succumbed to temptation. On X, he posted that his company had managed to mitigate serious mental-health issues related to chatbots, and had new tools to police content. That’s when he said his company would launch erotica in December.

Internally, Altman’s post blindsided OpenAI staffers and executives. Altman hadn’t told staff about the post, which he made just hours after OpenAI unveiled its advisory council on well-being. In that announcement, the company had said the council would “help define what healthy interactions with AI should look like for all ages.”

The next day, Altman clarified that mental health safeguards for teens wouldn’t be reduced. But he doubled down on allowing adults to have spicy conversations with his chatbot. 

We “aren’t the elected moral police of the world,” Altman wrote. “In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here.”

After Altman’s announcement, OpenAI employees soon realized a December launch would be hard to achieve. The company had pledged to release a system to guess users’ ages before releasing adult mode, so that it could keep minors out of erotic chats. But the company decided to do a slow rollout of that system in an effort to improve its accuracy, Fidji Simo, OpenAI’s chief executive of applications, said in a December podcast interview.

Since then however, internal and external concerns about the AI erotica have festered. Some staffers said they didn’t think OpenAI’s safety tools were ready, for instance to lock out prohibited content, like child abuse. Others said OpenAI was bending to financial incentives to try to make people attached to its models, people familiar with their thinking said. 

OpenAI has been busy in recent weeks with the fast-changing AI market. In early February, the company released a new version of its large language model, and at the end of the month it swooped in to sign a deal with the Pentagon just after the Department of Defense said it would stop working with rival Anthropic. 

In announcing the delay of adult mode, the company said it would focus instead on things like ChatGPT’s personality and personalization of the chatbot for users. Internally, officials have said the delay of adult mode could be at least a month. 

“We still believe in the principle of treating adults like adults,” the company said, “but getting the experience right will take more time.”


Oppo details Find N6’s impressive crease-free display - GSMArena.com news

  • Launch Timeline: The Oppo Find N6 is scheduled for official introduction in one week.
  • Display Engineering: The device features proprietary Auto-Smoothing Flex Glass designed to eliminate physical screen creases.
  • Hinge Mechanism: A second-generation Titanium Flexion Hinge utilizes liquid 3D printing for structural precision.
  • Manufacturing Precision: Development includes laser scanning and ultraviolet light solidification to address microscopic irregularities.
  • Material Composition: Structural components incorporate Grade-5 titanium alloy and an updated carbon fiber support plate.
  • Increased Durability: The integrated Auto-Smoothing Flex Glass is 50% thicker than standard industry Ultra-Thin Glass applications.
  • Mechanical Resilience: The design specifically addresses internal layer shifting to maintain surface flatness over time.
  • Performance Certification: Testing verified a crease depth reduction of 82% over 600,000 folds compared to the preceding Find N5 model.

We’re one week away from the Oppo Find N6 launch and the teasers are ramping up. The latest batch comes directly from Oppo and details the foldable’s crease-less display. Oppo claims it achieved the first “zero-feel crease” thanks to its updated hinge and all-new auto-smoothing Flex Glass.

As with any foldable device, the hinge is arguably the most important bit of hardware, and Oppo has a new 2nd-generation Titanium Flexion Hinge, which is made using a liquid 3D-printed component.


The development process starts with precise laser scanning, which creates a high-fidelity model for each surface of the hinge. The process then moves to the liquid 3D printing phase, where Oppo uses tiny photopolymer resin droplets to fill uneven areas. The 3D printing process helps eliminate any structural irregularities at a microscopic level. Each layer of the hinge is then instantly solidified with UV light.

The hinge and wing plates are made from Grade-5 titanium alloy, and the structure features a wider waterdrop design, which increases the folding radius while also reducing the stress exerted on the display. Oppo is also bringing an updated carbon fiber support plate.


But it’s not just the hinge: Oppo is also tackling the issue of “creep.” That’s when the internal layers of the folding screen begin to shift after daily use, leading to a deeper vertical crease along the fold. Oppo claims its Auto-Smoothing Flex Glass is 50% thicker than Ultra-Thin Glass (UTG), which makes it much more durable and resistant to deformation.

It also offers superior elasticity, resisting deformation and recovering its shape far better. Oppo tested the new display alongside TÜV Rheinland, and the tests concluded that the Find N6’s hinge and Auto-Smoothing Flex Glass reduce long-term crease depth by 82% compared to the Find N5. Oppo also claims the Find N6’s display remains perfectly flat, with no visible crease, even after more than 600,000 folds.



The Collective Superstitions of People Who Talk to Machines

  • Ritualistic Behavior: Individuals commonly employ specific, superstitious techniques when troubleshooting equipment or prompting artificial intelligence models.
  • Cartridge Analogy: The act of blowing into game cartridges was an inefficient ritual, while the actual mechanism for success was simply reseating the hardware.
  • Prompting Mechanics: Structured prompting serves as a functional equivalent to reseating hardware, providing a reliable method to improve model outputs.
  • Infinite Possibilities: Language models function by selecting specific sequences from a vast, pre-existing space of all possible word arrangements.
  • Lens Concept: Individual prompting techniques act as unique lenses that bring specific desired outputs from the infinite space into focus.
  • Authorship Theory: The specific cognitive path taken to arrive at a prompt imbues the final output with unique value, mirroring the literary philosophy of Pierre Menard.
  • Contextual Ownership: Photocopying a prompt template lacks the creative significance of the original process because it skips the personal developmental journey of the author.
  • Intrinsic Value: A chosen prompting methodology is significant primarily for its role in shaping the user's personal cognitive process rather than its technical effect on the model.

You had a technique.

Don’t pretend you didn’t. Everyone had a technique.


Three short breaths, then one long one. Or two long breaths across the whole cartridge on either side. Or you put it in your shirt and blew through it like I did. You knew yours was the right one because it worked, and you could prove it, because the game started right up every time.

Want to know what was actually happening? The 72-pin connector inside the NES was just flaky. When you pulled the cartridge out and put it back in, the pins shifted enough to find a new grip. That’s it. That was the whole fix. Reseat the cartridge.

The blowing… the blowing was depositing moisture onto the contacts. Which corroded them. Which made the problem worse over time. Your technique was actively making things worse in a way that felt like magic.

The game still worked, but the blowing didn’t matter. Because the reseating was bundled inside the ritual. You couldn’t blow without removing the cartridge first. The fix was hiding inside the myth.

Everybody I know had a different technique. Every technique contained the same accidental mechanism. Everyone had proof.

You Are Doing This Right Now


You talk to machines. Probably every day. Maybe more hours a day than you talk to people, if you’re being honest about it. (… and yeah… same... )

And you have a technique. You might not call it a technique. You might call it “my process” or “how I prompt” or “the way that works for me”. But it’s a technique.


You probably already suspect this, deep down, but do you want to know the myth that technique is hiding?

Any structured prompting beats naive prompting.

That’s the pin reseat. That’s the whole mechanism hiding inside every framework and every acronym and every “ultimate guide to prompt engineering” blog post with 47 clap emojis.

Go from typing “do this thing” to literally any system where you think before you type (roles, constraints, examples, XML tags, markdown headers, repeating your prompt a second time, whatever) and you get better output. Meaningfully better. Showably better. Screenshottably better.
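As a toy sketch of what “structure” can mean here (the role/task/constraints fields below are common conventions, invented for this example, not prescribed by any particular guide):

```python
# A minimal, hypothetical illustration of naive vs. structured prompting.
# The specific tags and fields are invented for this sketch, not a standard.

naive = "summarize this article"

structured_template = """<role>You are a careful technical editor.</role>
<task>Summarize the article below in exactly three bullet points.</task>
<constraints>
- Plain language, no jargon
- Each bullet under 20 words
</constraints>
<article>
{text}
</article>"""

def build_prompt(text: str) -> str:
    """Fill the structured template with the article text."""
    return structured_template.format(text=text)

prompt = build_prompt("Foldable phones are getting better hinges.")
print(prompt.splitlines()[0])  # <role>You are a careful technical editor.</role>
```

The particular structure matters less than the jump itself: going from the naive string to any deliberate structure is where most of the gain lives.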

[

](https://substackcdn.com/image/fetch/$s_!aHd8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fa5f168-e837-4516-9901-e63bacea023f_2103x1953.png)

Everyone is right and nobody is special. I’m sorry. (I’m not sorry.) (Ok, I’m a little sorry, but not because of why you think… )

I’m sorry because I’ve set you up. It’s different from the NES example. It does actually work, but not for the reason you think.

A few months ago, we talked about how every possible arrangement of words that an LLM can produce already exists, the way every possible image exists in the space of all possible pixel arrangements, the way every book already exists in Borges’ Library of Babel, waiting on its shelf for someone to find it. The output you’re trying to reach is already sitting in the space of all possible token sequences. Your prompting technique is a lens that brings one arrangement into focus out of infinity.
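A toy sketch of that lens idea, with an invented two-entry “model” standing in for a real LLM’s conditional distribution (all prompts and probabilities here are made up for illustration):

```python
# Invented numbers: a language model is, at each step, a conditional
# distribution over next tokens. The prompt doesn't create the continuation;
# it selects which already-possible continuation comes into focus.

toy_model = {
    "the answer is": {"42": 0.5, "unclear": 0.4, "worth pondering": 0.1},
    "after careful analysis, the answer is": {"worth pondering": 0.6, "42": 0.3, "unclear": 0.1},
}

def most_likely_next(prompt: str) -> str:
    """Greedy decoding: pick the highest-probability continuation."""
    dist = toy_model[prompt]
    return max(dist, key=dist.get)

print(most_likely_next("the answer is"))                          # 42
print(most_likely_next("after careful analysis, the answer is"))  # worth pondering
```

Both continuations “exist” in the model the whole time; the prompt only changes which one surfaces.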

But that’s only half the story. There are also an infinite number of lenses.

Pierre Menard, Author of Your Prompt


Borges has another story called “Pierre Menard, Author of the Quixote.” In it, Menard decides he wants to rewrite Don Quixote. But instead of just copying it or updating it for a modern audience, he wants to produce the exact same text, word for word, by arriving at it through his own life and his own reading and his own suffering and experience.

He’s able to accomplish the task for at least a few chapters.

It sounds like a silly premise, but the real message from Borges comes through when he puts the two versions side by side (Cervantes’ and Menard’s, which are identical, letter for letter) and argues that Menard’s version is richer. When Cervantes writes “truth, whose mother is history,” it’s a routine rhetorical flourish from a seventeenth-century Spaniard. When Menard writes the identical phrase, it’s a staggering philosophical claim, because Menard is a contemporary of William James and Bertrand Russell. He chose these words while knowing everything that came after Cervantes. The text is exactly the same, but the act of producing it was different.

The richness comes from the path the words took to make it to the page.

Connecting This Back To What We’ve Been Talking About


Every time you sit down at a chat window and write a prompt, you are authoring an output. The LLM generates it, sure. But you arrived at that specific generation through your specific process, your specific thinking, your specific way of structuring a question. The prompt carries the full weight of how you got there. And that means the output does too.

This is the Menard situation. The process of arriving is itself a creative act. You shaped the question, which shaped the output, which means the output carries the imprint of your path through it. When you evaluate that output, you’re evaluating something you authored, through your own weird journey, the same way Menard authored the Quixote through his.

And the person who copies your prompt template and gets the exact same text? They produced a different artifact. Same words. Different authorship. Different relationship to what those words mean and whether they’re right. They didn’t take the path; they photocopied the destination. It’s Cervantes’ Quixote to them. Fine. Good. Historically significant. But it didn’t come from where they’ve been.

This is why techniques don’t transfer well. Why there are so many different ones. It looks like the technique is the thing, the way the blowing looked like the thing. But the technique is just the visible residue of a whole cognitive path, and the path is what actually does the work.

There’s no manual for talking to machines. Or there are ten thousand manuals, which is the same thing. Everyone’s blowing on the cartridge. Everyone’s got proof. The mechanism is trivial and identical across all of them, and also completely irrelevant, because the thing that actually matters is the weird, unrepeatable path you took to get there.

Your prompting technique isn’t special because of what it does to the model. It’s special because of what it does to you.

Don’t Believe Me? Go Find Out!


So Claude and I built a little artifact game to explore this. (I know, I know. "Scott built a thing with Claude" is basically the subtitle of this newsletter at this point… )

It’s here. Eight challenges. Each one gives you a target output and says: get there. However you want. Whatever technique you’ve got. Blow on the cartridge your way.

It’s also got a gallery to explore the different ways other people got there. Every prompt that hits above 60% accuracy to the target text gets added.
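The post doesn’t say how that 60% is measured, so as a hedged sketch, here is one plausible way to score a submission against the target text, using Python’s standard-library `difflib`:

```python
# Hypothetical scoring: the gallery's actual metric isn't specified, so this
# uses difflib's similarity ratio (0.0 to 1.0) as one reasonable stand-in.
from difflib import SequenceMatcher

def accuracy(output: str, target: str) -> float:
    """Similarity ratio between a model's output and the target text."""
    return SequenceMatcher(None, output, target).ratio()

def qualifies_for_gallery(output: str, target: str, threshold: float = 0.60) -> bool:
    """Would this output clear the gallery's 60% bar under this metric?"""
    return accuracy(output, target) >= threshold

print(qualifies_for_gallery("To be, or not to be", "To be, or not to be"))  # True
print(qualifies_for_gallery("Something else entirely", "To be, or not to be"))
```

Any string-similarity measure would do here; the interesting part is that wildly different prompts can all clear the same bar.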

Some will be long. Some will be absurdly short. Some will use roles, some will read like bullet points, and some will just say “output this exact text: [text]”. They are all correct. They are all different. They arrive at the same place from entirely different directions.

But, don’t just look at the gallery. Play it. Watch yourself prompt. Watch what you reach for when nobody’s giving you a framework. The shape of your instinct is the interesting part here.

Go be Menard. Go produce the Quixote your way. Then look at the gallery and see all the other Menards.

You have a technique. Now you know who it’s for.
