Strategic Initiatives

The EU made Apple adopt new Wi-Fi standards, and now Android can support AirDrop - Ars Technica

  • Apple RCS Support: Added last year to iOS platforms, enhancing consistency, reliability, and security for green-bubble texts between iPhone and Android users.
  • Google Quick Share Update: Announces integration with Apple's AirDrop for improved file sharing interoperability.
  • Cross-Platform Visibility: Apple devices with AirDrop in "everyone for 10 minutes" mode appear in Android's Quick Share list, and compatible Androids appear in AirDrop menus.
  • Direct File Transfer: Uses local peer-to-peer Wi-Fi connections, avoiding transmission through company servers.
  • Initial Device Support: Feature launches on the Pixel 10 series, with plans to expand to more Android devices, though no timeline has been specified.
  • Mode Limitations: Incompatible with AirDrop's default "contacts only" mode; Google expresses interest in future collaboration with Apple.
  • Development Independence: Google implemented the feature without Apple's direct involvement, as confirmed to The Verge.
  • Security Features: Employs Rust programming language for memory-safe implementation, preventing exploitation via malicious data packets.

Last year, Apple finally added support for Rich Communications Services (RCS) texting to its platforms, improving consistency, reliability, and security when exchanging green-bubble texts between the competing iPhone and Android ecosystems. Today, Google is announcing another small step forward in interoperability, pointing to a slightly less annoying future for friend groups or households where not everyone owns an iPhone.

Google has updated Android’s Quick Share feature to support Apple’s AirDrop, which allows users of Apple devices to share files directly using a local peer-to-peer Wi-Fi connection. Apple devices with AirDrop enabled and set to “everyone for 10 minutes” mode will show up in the Quick Share device list just like another Android phone would, and Android devices that support this new Quick Share version will also show up in the AirDrop menu.

Google will only support this feature on the Pixel 10 series, at least to start. The company is “looking forward to improving the experience and expanding it to more Android devices,” but it didn’t announce anything about a timeline or any hardware or software requirements. Quick Share also won’t work with AirDrop devices working in the default “contacts only” mode, though Google “[welcomes] the opportunity to work with Apple to enable ‘Contacts Only’ mode in the future.” (Reading between the lines: Google and Apple are not currently working together to enable this, and Google confirmed to The Verge that Apple hadn’t been involved in this at all.)

As with AirDrop, files shared via Quick Share are transferred directly between devices, Google notes, without being sent to either company’s servers first.

Google shared a little more information in a separate post about Quick Share’s security, crediting Android’s use of the memory-safe Rust programming language with making secure file sharing between platforms possible.


“Its compiler enforces strict ownership and borrowing rules at compile time, which guarantees memory safety,” writes Google VP of Platforms Security and Privacy Dave Kleidermacher. “Rust removes entire classes of memory-related bugs. This means our implementation is inherently resilient against attackers attempting to use maliciously crafted data packets to exploit memory errors.”


Americans’ Social Media Use 2025 | Pew Research Center

  • Survey Methodology: Pew Research Center conducted two surveys in 2025: NPORS with 5,022 U.S. adults using multimode sampling, and ATP with 5,123 adults focusing on frequency of use for select platforms.
  • Top Platforms: 84% of U.S. adults use YouTube, 71% use Facebook, and 50% use Instagram, with smaller shares for TikTok (37%) and WhatsApp (32%).
  • Emerging Platforms: About 10% or fewer use Threads, Bluesky, and Truth Social, as reported in the 2025 survey.
  • Growth Trends: TikTok usage rose to 37% from 21% in 2021; Instagram to 50% from 40%; WhatsApp to 32% from 23%; Reddit to 26% from 18%.
  • Age Differences: Adults under 30 show higher usage of Instagram (80%), Snapchat, TikTok, and Reddit compared to those 65 and older (e.g., 19% for Instagram).
  • Demographic Variations: Women exceed men in Facebook, Instagram, and TikTok use; Black and Hispanic adults lead in Instagram and TikTok; college-educated more likely for Reddit and WhatsApp.
  • Partisan Differences: Democrats more likely to use WhatsApp, Reddit, TikTok, Bluesky, and Threads; Republicans lead in X (24% vs. 19%) and Truth Social.
  • Daily Usage: 50% visit Facebook and YouTube daily; 24% use TikTok daily; younger adults (18-29) report 50% daily TikTok use versus 5% for those 65 and older.

How we did this

To better understand which social media platforms Americans use, Pew Research Center surveyed 5,022 U.S. adults from Feb. 5 to June 18, 2025. SSRS conducted this National Public Opinion Reference Survey (NPORS) for the Center using address-based sampling and a multimode protocol that included web, mail and phone. This way nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race and ethnicity, education, and other categories.

Surveys fielded before 2023 were conducted via phone. For more on the mode shift in 2023, read our Q&A.

Here are the questions from this survey used for this report, the topline and the methodology.

We also surveyed 5,123 U.S. adults from Feb. 24 to March 2, 2025, to understand how frequently Americans use four specific platforms: YouTube, Facebook, TikTok and X (formerly known as Twitter). Everyone who took part in this survey is a member of the Center’s American Trends Panel (ATP), a group of people recruited through national, random sampling of residential addresses who have agreed to take surveys regularly. This kind of recruitment gives nearly all U.S. adults a chance of selection. Interviews were conducted either online or by telephone with a live interviewer. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other factors. Read more about the ATP’s methodology.

Here are the questions from this survey used for this report, the topline and the methodology.

Even as debates continue about the role of social media in our country, including on censorship and its impact on youth, Americans use a range of online platforms, and many do so daily.

Which online platforms do Americans most commonly use?

A bar chart showing that most U.S. adults use YouTube and Facebook; half report using Instagram

YouTube and Facebook remain the most widely used online platforms. The vast majority of U.S. adults (84%) say they ever use YouTube. Most Americans (71%) also report using Facebook. These findings are according to a Pew Research Center survey of 5,022 U.S. adults conducted Feb. 5-June 18, 2025.

Half of adults say they use Instagram, making it the only other platform in our survey used by at least 50% of Americans.

Smaller shares use the other sites and apps we asked about, such as TikTok (37%) and WhatsApp (32%). Somewhat fewer say the same of Reddit, Snapchat and X (formerly Twitter).

This year we also asked about three platforms that are used by about one-in-ten or fewer U.S. adults: Threads, Bluesky, and Truth Social.

Center studies also find that YouTube is the most widely used online platform among U.S. teens, like it is among U.S. adults.

Changes in use of online platforms

A line chart showing that TikTok, Instagram, WhatsApp and Reddit have continued to gain users in recent years

The Center has long tracked use of many of these platforms. Over the past few years, four of them have grown in overall use among U.S. adults – TikTok, Instagram, WhatsApp and Reddit.

TikTok: 37% of U.S. adults report using the platform, which is slightly up from last year and up from 21% in 2021.

Instagram: Half of U.S. adults now report using it, which is on par with last year but up from 40% in 2021.

WhatsApp and Reddit: About a third say they use WhatsApp, up from 23% in 2021. And 26% today report using Reddit, compared with 18% four years ago.   

While YouTube and Facebook continue to sit at the top, the shares of Americans who report using them have remained relatively stable in recent years.

Large age gaps in use of many platforms

A dot plot chart showing that adults under 30 are far more likely to use Instagram, Snapchat, TikTok and Reddit

Adults under 30 are more likely than older adults to use most of these platforms. Consistent with previous Center data, the survey finds that the youngest adults particularly stand out in their use of Instagram, Snapchat, TikTok and Reddit. 

For instance, eight-in-ten adults ages 18 to 29 say they use Instagram. This is higher than the shares seen among older age groups – especially adults ages 65 and older (19%).

YouTube and Facebook are the only sites asked about that a majority in all age groups use, though for YouTube, the youngest adults are still the most likely to do so. This differs from Facebook, where 30- to 49-year-olds most commonly say they use it (80%).

Differences by gender, race and ethnicity, education, and party

While some of the biggest demographic differences in social media use are across age groups, use also varies by other factors.

A table showing that use of some online platforms differs across demographic groups such as age, race and ethnicity, gender, and education

By gender

Women stand out in their use of several platforms, including Facebook, Instagram and TikTok. For instance, more than half of women report using Instagram (55%), compared with under half of men (44%). By contrast, men are more likely to report using platforms such as X and Reddit.

By race and ethnicity

White adults are less likely than Black and Hispanic adults – and sometimes Asian adults – to use online platforms such as Instagram, TikTok and WhatsApp. For instance, 45% of White adults report using Instagram, compared with larger shares among Hispanic (62%), Asian (58%) and Black adults (54%). 

By education

Americans with higher levels of formal education are more likely to report using some sites and apps, including Reddit, WhatsApp and Instagram. With Reddit, for example, about four-in-ten adults with at least a college degree say they use the platform. Smaller shares of those with some college education (28%) or a high school diploma or less education (15%) say this.

By contrast, those with some college or less education are more likely to use TikTok than those with at least a college degree.

By party

Democrats and Democratic-leaning independents are more likely than Republicans and Republican leaners to report using WhatsApp, Reddit, TikTok, Bluesky and Threads.  

By comparison, Republicans are more likely to say they use X and Truth Social. For instance, 24% of Republicans now report using X, compared with 19% of Democrats. However, just two years ago Democrats were more likely to report using the platform than Republicans (26% vs. 20%).

Frequency of social media use

A bar chart showing that about half of U.S. adults go on Facebook and YouTube daily, and 24% do so on TikTok

In addition to understanding if people use a particular site or app, we also asked Americans how frequently they do so. These questions were asked in a separate survey of 5,123 U.S. adults conducted from Feb. 24 to March 2, 2025.

Greater shares of Americans visit Facebook and YouTube daily than other sites. About half of U.S. adults say they visit each of these platforms at least once a day. This includes 37% who visit Facebook several times a day, and 33% who say the same of YouTube.

About a quarter (24%) say they are daily users of TikTok. Fewer (10%) report daily use of X.

Daily use of social media, by age

A dot plot chart showing that a majority of younger adults say they use YouTube daily, and roughly half do so on TikTok

Younger adults are far more likely than older adults to report using YouTube and TikTok daily.

Roughly half of 18- to 29-year-olds say they go on TikTok at least once a day, compared with just 5% of adults ages 65 and older.

There is also an age gap for X, though it’s more modest.

For Facebook, the two middle age groups are most likely to report going on it daily: 58% of 30- to 49-year-olds and 54% of 50- to 64-year-olds say this.


Europe races to respond to US-Russian Ukraine peace plan // Proposal offering major concessions to Moscow blindsided European diplomats

  • US-Russian Peace Initiative: A 28-point plan drafted by aides to Presidents Trump and Putin proposes Ukraine cede controlled territories and forgo NATO membership, with Washington urging acceptance.
  • European Alarm: Leaders in key capitals view the proposal as Kyiv's capitulation to Moscow, prompting urgent coordination amid surprise at the pace.
  • Diplomatic Scramble: With major figures at the G20 in South Africa, calls and a planned crisis meeting aim to counter the US-Russian push, including briefings for envoys in Kyiv.
  • Uncertainty on Support: European officials question full Trump backing, citing internal US debates and past unconsulted proposals that strained transatlantic ties.
  • Key Concessions Highlighted: A ban on NATO forces in Ukraine and the redirection of $100bn in frozen Russian assets into reconstruction projects that would profit the US fuel concerns over sovereignty.
  • Market Reactions: European defense stocks plunge, with the Stoxx 600 aerospace and defence index down 2.6% and firms like Renk and Rheinmetall hit hard; oil and gas prices also decline.
  • Ukrainian Stance: Officials deny partial agreements, emphasizing study of proposals while upholding principles of sovereignty, security, and just peace.

European capitals are rushing to co-ordinate a response to a US-Russian peace plan to end the war in Ukraine, which officials have warned would mean Kyiv’s “capitulation” to Moscow’s demands.

Washington is pressing Ukraine to accept a 28-point peace plan drafted by aides to US President Donald Trump and Russian President Vladimir Putin that calls for Kyiv to cede swaths of land currently under its control.

It contains further major concessions to Moscow, including banning any future Ukrainian membership in Nato.

The plan, and the scale of US pressure on Ukraine, has blindsided European leaders and their national security teams, who on Friday were rushing to devise a coherent response to Washington, people involved in the frantic diplomatic talks told the Financial Times.

“We’re all still analysing it but it’s moving much faster than we had realised,” one of the people said. “It basically means capitulation [to Moscow].”

Three senior European officials said they were still unsure as to whether Trump completely supported the plan or if it was subject to internal disagreement within his administration.

“We’re back to square one,” said another senior European official, referring to widespread fears at the start of this year that Trump would force Kyiv to accept Russia’s peace demands or lose US military support.

The European rearguard effort to slow or block the US-Russian proposal has been complicated by the leaders of France, Germany, Italy and the UK travelling to South Africa for the G20 summit this weekend.

German Chancellor Friedrich Merz, French President Emmanuel Macron and UK Prime Minister Keir Starmer were due to hold a call to discuss the plan at 1pm Johannesburg time on Friday, said people briefed on the situation.

Officials said plans were also under way for a crisis meeting on Saturday of European leaders present in Johannesburg for the gathering.

António Costa, the EU Council president who represents the EU’s 27 leaders, said that the EU “has not been communicated any [peace] plans in an official manner”.

French President Emmanuel Macron, right, in Mauritius on Thursday. He is attending the G20 summit this weekend in South Africa © Peter L M Hough/EPA/Shutterstock

The US embassy in Kyiv has summoned European envoys in the Ukrainian capital for a briefing on the plan on Friday afternoon with US secretary of the army Daniel Driscoll. Driscoll met Ukrainian President Volodymyr Zelenskyy on Thursday to lay out the proposal.

The panic among European capitals resembled previous episodes earlier this year in which the Trump administration made proposals to Ukraine without consulting or notifying European partners.

“The Europeans had become incredibly complacent in their dealings with the Trump administration,” said Mujtaba Rahman, managing director at Eurasia Group. “This latest plan is a timely reminder that there are very powerful forces within the administration that continue to prioritise Russia over transatlantic relations.”

Another senior European diplomat said: “We are assessing. The main thing is to remain cool and work for a more reasonable outcome.”

Ursula von der Leyen, European Commission president, said on Friday that she would “discuss the situation both with European leaders and with other leaders here at the sidelines of the G20”.

“Nothing about Ukraine without Ukraine,” von der Leyen said, repeating the EU’s long-standing position on peace negotiations.


In addition to the demand for Ukraine to relinquish control over territory that it currently controls, European officials were most alarmed by the proposal banning Nato forces from being stationed in Ukraine and its call for $100bn of frozen Russian sovereign assets to be deployed into reconstruction projects that would earn profits for the US.

European defence stocks, which have soared this year as the continent has stepped up its spending commitments, dropped sharply on Friday morning. The Stoxx 600 aerospace and defence index fell 2.6 per cent.

Shares in Renk Group sank 8.2 per cent, Rheinmetall dropped 5.7 per cent and Hensoldt fell 5.2 per cent in morning trading in Europe.

The price of oil also dropped, with the international benchmark Brent crude down 1.2 per cent to $62.64 a barrel. Gas prices fell, with futures on the European benchmark TTF down 0.8 per cent to €30.48 per megawatt hour.

Rustem Umerov, secretary of Ukraine’s national security and defence council, on Friday denied reports that he had agreed to parts of the proposal and secured a change to a section that had called for an audit of international aid to Kyiv.

“We are carefully studying all the partners’ proposals, expecting the same respectful attitude towards the Ukrainian position,” Umerov said.

“We are thoughtfully processing the partners’ proposals within the framework of Ukraine’s unchanging principles — sovereignty, people’s security and a just peace.”

Additional reporting by Ben Hall, Emily Herbert and Anna Gross


Generative AI in the Real World: The LLMOps Shift with Abi Aryan – O’Reilly

  • LLMOps Evolution: Discussion on the shift from MLOps to LLMOps, highlighting changes in infrastructure and pipelines for handling larger, generative models.
  • Tool Changes for Engineers: Daily tools for LLMOps focus on cost control, latency, throughput, and dynamic monitoring, differing from traditional MLOps containerization.
  • Transition Challenges: MLOps engineers upskill through evals and infrastructure work, while software engineers adapt more easily by treating LLMs as end-to-end systems.
  • Agentic Systems Definition: Agents handle intent understanding and execution, workflows manage planning, treating systems as distributed computing problems.
  • Reliability in Pipelines: Compounding errors in multi-agent workflows require microservices architecture, consensus mechanisms, and research like MassGen.
  • Framework Limitations: Many agentic and evaluation frameworks are unreliable or bloated, leading companies to develop in-house solutions for debugging and optimization.
  • Observability Approach: Existing tools suffice when paired with new frameworks for anomaly detection via logging and small ML models, emphasizing telemetry over static dashboards.
  • Future Priorities: Emphasis on model profiling, dynamic RL environments with curriculum learning, data engineering gains, and learning distributed systems like Ray.


MLOps is dead. Well, not really, but for many the job is evolving into LLMOps. In this episode, Abide AI founder and LLMOps author Abi Aryan joins Ben to discuss what LLMOps is and why it’s needed, particularly for agentic AI systems. Listen in to hear why LLMOps requires a new way of thinking about observability, why we should spend more time understanding human workflows before mimicking them with agents, how to do FinOps in the age of generative AI, and more.

About the Generative AI in the Real World podcast: In 2023, ChatGPT put AI on everyone’s agenda. In 2025, the challenge will be turning those agendas into reality. In Generative AI in the Real World, Ben Lorica interviews leaders who are building with AI. Learn from their experience to help put AI to work in your enterprise.

Check out other episodes of this podcast on the O’Reilly learning platform.

Transcript

This transcript was created with the help of AI and has been lightly edited for clarity.

00.00: All right, so today we have Abi Aryan. She is the author of the O’Reilly book on LLMOps as well as the founder of Abide AI. So, Abi, welcome to the podcast. 

00.19: Thank you so much, Ben. 

00.21: All right. Let’s start with the book, which I confess, I just cracked open: LLMOps. People probably listening to this have heard of MLOps. So at a high level, the models have changed: They’re bigger, they’re generative, and so on and so forth. So since you’ve written this book, have you seen a wider acceptance of the need for LLMOps? 

00.51: I think more recently there are more infrastructure companies. So there was a conference happening recently, and there was this sort of perception or messaging across the conference, which was “MLOps is dead.” Although I don’t agree with that. 

There’s a big difference that companies have started to pick up on more recently, as the infrastructure around the space has sort of started to improve. They’re starting to realize how different the pipelines were that people managed and grew, especially for the older companies like Snorkel that were in this space for years and years before large language models came in. The way they were handling data pipelines—and even the observability platforms that we’re seeing today—have changed tremendously.

01.40: What about, Abi, the general. . .? We don’t have to go into specific tools, but we can if you want. But, you know, if you look at the old MLOps person and then fast-forward, this person is now an LLMOps person. So on a day-to-day basis [has] their suite of tools changed? 

02.01: Massively. I think for an MLOps person, the focus was very much around “This is my model. How do I containerize my model, and how do I put it in production?” That was the entire problem and, you know, most of the work was around “Can I containerize it? What are the best practices around how I arrange my repository? Are we using templates?” 

Drawbacks happened, but not as much, because most of the time the stuff was tested and there was not too much nondeterministic behavior within the models themselves. Now that has changed.

02.38: [For] most of the LLMOps engineers, the biggest job right now is doing FinOps really, which is controlling the cost because the models are massive. The second thing, which has been a big difference, is we have shifted from “How can we build systems?” to “How can we build systems that can perform, and not just perform technically but perform behaviorally as well?”: “What is the cost of the model? But also what is the latency? And see what’s the throughput looking like? How are we managing the memory across different tasks?” 

The problem has really shifted when we talk about it. . . So a lot of focus for MLOps was “Let’s create fantastic dashboards that can do everything.” Right now, no matter which dashboard you create, the monitoring is really very dynamic. 
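To make that shift concrete, here is a minimal sketch of the kind of per-request cost, latency, and throughput tracking described above. The model name and per-token prices are illustrative placeholders, not real provider rates.

```python
from dataclasses import dataclass, field

# Illustrative prices in USD per million tokens; real rates vary by provider.
PRICE_PER_M = {"example-model": {"input": 3.00, "output": 15.00}}

@dataclass
class RequestMetrics:
    model: str
    input_tokens: int
    output_tokens: int
    latency_s: float

    @property
    def cost_usd(self) -> float:
        # Cost = tokens * per-million-token price, split by direction.
        p = PRICE_PER_M[self.model]
        return (self.input_tokens * p["input"]
                + self.output_tokens * p["output"]) / 1_000_000

@dataclass
class CostTracker:
    requests: list = field(default_factory=list)

    def record(self, m: RequestMetrics) -> None:
        self.requests.append(m)

    def summary(self) -> dict:
        # Aggregate the three numbers an LLMOps dashboard keeps coming back to:
        # total spend, average latency, and output-token throughput.
        total_time = sum(r.latency_s for r in self.requests)
        return {
            "total_cost_usd": round(sum(r.cost_usd for r in self.requests), 4),
            "avg_latency_s": total_time / len(self.requests),
            "throughput_tok_per_s": sum(r.output_tokens for r in self.requests) / total_time,
        }

tracker = CostTracker()
tracker.record(RequestMetrics("example-model", 1200, 400, latency_s=2.1))
tracker.record(RequestMetrics("example-model", 800, 650, latency_s=3.4))
print(tracker.summary())
```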

03.32: Yeah, yeah. As you were talking there, you know, I started thinking, yeah, of course, obviously now the inference is essentially a distributed computing problem, right? So that was not the case before. Now you have different phases even of the computation during inference, so you have the prefill phase and the decode phase. And then you might need different setups for those. 

So anecdotally, Abi, did the people who were MLOps people successfully migrate themselves? Were they able to upskill themselves to become LLMOps engineers?

04.14: I know a couple of friends who were MLOps engineers. They were teaching MLOps as well—Databricks folks, MVPs. And they were now transitioning to LLMOps.

But the way they started is they started focusing very much on “Can you do evals for these models?” They weren’t really dealing with the infrastructure side of it yet. And that was their slow transition. And right now they’re very much at that point where they’re thinking, “OK, can we make it easy to just catch these problems within the model inferencing itself?”

04.49: A lot of other problems still stay unsolved. Then the other side, which was like a lot of software engineers who entered the field and became AI engineers, they have a much easier transition because software. . . The way I look at large language models is not just as another machine learning model but literally like software 3.0 in that way, which is it’s an end-to-end system that will run independently.

Now, the model isn’t just something you plug in. The model is the product. So for those people, most software is built around these ideas, which is, you know, we need strong cohesion. We need low coupling. We need to think about “How are we doing microservices, how the communication happens between different tools that we’re using, how are we calling up our endpoints, how are we securing our endpoints?”

Those questions come easier. So the system design side of things comes easier to people who work in traditional software engineering. So the transition has been a little bit easier for them as compared to people who were traditionally like MLOps engineers. 

05.59: And hopefully your book will help some of these MLOps people upskill themselves into this new world.

Let’s pivot quickly to agents. Obviously it’s a buzzword. Just like anything in the space, it means different things to different teams. So how do you distinguish agentic systems yourself?

06.24: There are two words in the space. One is agents; one is agent workflows. Basically agents are the components really. Or you can call them the model itself, but they’re trying to figure out what you meant, even if you forgot to tell them. That’s the core work of an agent. And the work of a workflow or the workflow of an agentic system, if you want to call it, is to tell these agents what to actually do. So one is responsible for execution; the other is responsible for the planning side of things. 

07.02: I think sometimes when tech journalists write about these things, the general public gets the notion that there’s this monolithic model that does everything. But the reality is, most teams are moving away from that design as you, as you describe.

So they have an agent that acts as an orchestrator or planner and then parcels out the different steps or tasks needed, and then maybe reassembles in the end, right?

07.42: Coming back to your point, it’s now less of a problem of machine learning. It’s, again, more like a distributed systems problem because we have multiple agents. Some of these agents will have more load—they will be the frontend agents, which are communicating to a lot of people. Obviously, on the GPUs, these need more distribution.

08.02: And when it comes to the other agents that may not be used as much, they can be provisioned based on “This is the need, and this is the availability that we have.” So all of that provisioning again is a problem. The communication is a problem. Setting up tests across different tasks itself within an entire workflow, now that becomes a problem, which is where a lot of people are trying to implement context engineering. But it’s a very complicated problem to solve. 

08.31: And then, Abi, there’s also the problem of compounding reliability. Let’s say, for example, you have an agentic workflow where one agent passes off to another agent and yet to another third agent. Each agent may have a certain amount of reliability, but it compounds over time. So it compounds across this pipeline, which makes it more challenging. 
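To put rough numbers on that compounding, a back-of-the-envelope sketch, assuming each agent succeeds independently:

```python
# End-to-end reliability of a chain of agents decays geometrically:
# even highly reliable stages compound into a much weaker pipeline.
def pipeline_reliability(per_agent: float, n_agents: int) -> float:
    return per_agent ** n_agents

for n in (1, 3, 5):
    print(f"{n} agent(s) at 95%: {pipeline_reliability(0.95, n):.1%} end to end")
# 1 agent: 95.0%, 3 agents: 85.7%, 5 agents: 77.4%
```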

09.02: And that’s where there’s a lot of research work going on in the space. It’s an idea that I’ve talked about in the book as well, especially chapter four, where a lot of this is described. Most of the companies right now are [using] monolithic architecture, but that’s not going to be sustainable as we move toward production applications.

We have to go towards a microservices architecture. And the moment we go towards microservices architecture, there are a lot of problems. One will be the hardware problem. The other is consensus building, which is. . . 

Let’s say you have three different agents spread across three different nodes, which would be running very differently. Let’s say one is running on an H100; one is running on something else. How can we achieve consensus on which node’s answer ends up winning? So that’s open research work [where] people are trying to figure out, “Can we achieve consensus in agents based on whatever answer the majority is giving, or how do we really think about it?” It could be set up with a threshold: if agreement is beyond this threshold, then, you know, this perfectly works.

One of the frameworks that is trying to work in this space is called MassGen—they’re working on the research side of solving this problem in the tool itself. 
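MassGen’s actual mechanism isn’t detailed here, but the idea sketched above, a majority vote gated by an agreement threshold, fits in a few lines of Python:

```python
from collections import Counter

def consensus(answers: list[str], threshold: float = 0.66) -> str | None:
    """Return the most common answer if enough agents agree on it,
    otherwise None, signaling that no consensus was reached."""
    top, count = Counter(answers).most_common(1)[0]
    return top if count / len(answers) >= threshold else None

# Three agents on different nodes return independent answers.
print(consensus(["42", "42", "17"]))   # "42": two of three clear the bar
print(consensus(["42", "17", "99"]))   # None: no answer reaches consensus
```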

10.31: By the way, even back in the microservices days in software architecture, obviously people went overboard too. So I think that, as with any of these new things, there’s a bit of trial and error that you have to go through. And the better you can test your systems and have a setup where you can reproduce and try different things, the better off you are, because many times your first stab at designing your system may not be the right one. Right? 

11.08: Yeah. And I’ll give you two examples of this. So AI companies tried to use a lot of agentic frameworks. You know people have used Crew; people have used n8n, they’ve used. . . 

11.25: Oh, I hate those! Not I hate. . . Sorry. Sorry, my friends and crew. 

11.30: And 90% of the people working in this space seriously have already made that transition, which is “We are going to write it ourselves.”

The same happened for evaluation: There were a lot of evaluation tools out there. What they were doing on the surface is literally just tracing, and tracing wasn’t really solving the problem—it was just a beautiful dashboard that doesn’t really serve much purpose. Maybe for the business teams. But at least for the ML engineers who are supposed to debug these problems and, you know, optimize these systems, essentially, it was not giving much other than “What is the error response that we’re getting to everything?”

12.08: So again, for that one as well, most of the companies have developed their own evaluation frameworks in-house as of now. The people who are just starting out obviously still use them. But most of the companies that started working with large language models in 2023, they’ve tried every tool out there in 2023, 2024. And right now more and more people are staying away from the frameworks, LangChain and everything.

People have understood that most of the frameworks in this space are not superreliable.

12.41: And [are] also, honestly, a bit bloated. They come with too many things that you don’t need in many ways. . .

12.54: Security loopholes as well. So for example, like I reported one of the security loopholes with LangChain as well, with LangSmith back in 2024. So those things obviously get reported by people [and] get worked on, but the companies aren’t really proactively working on closing those security loopholes. 

13.15: Two open source projects that I like that are not specifically agentic are DSPy and BAML. Wanted to give them a shout out. So this point I’m about to make, there’s no easy, clear-cut answer. But one thing I noticed, Abi, is that people will do the following, right? I’m going to take something we do, and I’m going to build agents to do the same thing. But the way we do things is I have a—I’m just making this up—I have a project manager and then I have a designer, I have role B, role C, and then there’s certain emails being exchanged.

So then the first step is “Let’s replicate not just the roles but kind of the exchange and communication.” And sometimes that actually increases the complexity of the design of your system because maybe you don’t need to do it the way the humans do it. Right? Maybe if you go to automation and agents, you don’t have to over-anthropomorphize your workflow. Right. So what do you think about this observation? 

14.31: A very interesting analogy I’ll give you is people are trying to replicate intelligence without understanding what intelligence is. The same for consciousness. Everybody wants to replicate and create consciousness without understanding consciousness. So the same is happening with this as well, which is we are trying to replicate a human workflow without really understanding how humans work.

14.55: And sometimes humans may not be the most efficient thing. Like they exchange five emails to arrive at something. 

15.04: And humans are never context-defined in a very limiting sense. Even if somebody’s job is to do editing, they’re not just doing editing. They are looking at the flow. They are looking for a lot of things which you can’t really define. Obviously you can over a period of time, but it needs a lot of observation to understand. And that skill also depends on who the person is. Different people have different skills as well. Most of the agentic systems right now, they’re just glorified Zapier IFTTT routines. That’s the way I look at them right now. The if recipes: If this, then that.

15.48: Yeah, yeah. Robotic process automation I guess is what people call it. The other thing that I don’t think people understand just from reading the popular tech press is that agents have levels of autonomy, right? Most teams don’t actually build an agent and unleash it fully autonomous from day one.

I mean, I guess the analogy would be in self-driving cars: They have different levels of automation. Most enterprise AI teams realize that with agents, you have to kind of treat them that way too, depending on the complexity and the importance of the workflow. 

So at first a human is very much involved, and then less and less over time as you develop confidence in the agent.

But I think it’s not good practice to just kind of let an agent run wild. Especially right now. 

16.56: It’s not, because who’s the person answering if the agent goes wrong? And that’s a question that has come up often. So this is the work that we’re doing at Abide really, which is trying to create a decision layer on top of the knowledge retrieval layer.

17.07: Most of the agents which are built using just large language models. . . LLMs—I think people need to understand this part—are fantastic at knowledge retrieval, but they do not know how to make decisions. If you think agents are independent decision makers and they can figure things out, no, they cannot figure things out. They can look at the database and try to do something.

Now, what they do may or may not be what you like, no matter how many rules you define. So what we really need to develop is some sort of symbolic language around how these agents are working, which is more like trying to give them a model of the world: What is the cause and effect of all of these decisions that you’re making? How do we prioritize one decision over another? What was the reasoning behind it? That entire decision-making reasoning has been the missing part.

18.02: You brought up the topic of observability. There’s two schools of thought here as far as agentic observability. The first one is we don’t need new tools. We have the tools. We just have to apply [them] to agents. And then the second, of course, is this is a new situation. So now we need to be able to do more. . . The observability tools have to be more capable because we’re dealing with nondeterministic systems.

And so maybe we need to capture more information along the way. Chains of decision, reasoning, traceability, and so on and so forth. Where do you fall in this kind of spectrum of we don’t need new tools or we need new tools? 

18.48: We don’t need new tools, but we certainly need new frameworks, and especially a new way of thinking. Observability in the MLOps world—fantastic; it was just about tools. Now, people have to stop thinking about observability as just visibility into the system and start thinking of it as an anomaly detection problem. And that was something I’d written in the book as well. Now it’s no longer about “Can I see what my token length is?” No, that’s not enough. You have to look for anomalies at every single part of the layer across a lot of metrics. 

19.24: So your position is we can use the existing tools. We may have to log more things. 

19.33: We may have to log more things, and then start building simple ML models to be able to do anomaly detection. 

Think of managing any machine, any LLM model, any agent as really like a fraud detection pipeline. So every single time you’re looking for “What are the simplest signs of fraud?” And that can happen across various factors. But we need more logging. And again you don’t need external tools for that. You can set up your own loggers as well.

Most of the people I know have been setting up their own loggers within their companies. So you can simply use telemetry to a) define and use the general logs, and b) define your own custom logs as well, depending on your agent pipeline itself. You can define “This is what it’s trying to do,” log more things across those points, and then start building small machine learning models to look for what’s going on over there.
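A minimal sketch of that pattern, assuming nothing beyond the standard library: log per-request telemetry yourself, then score each new value against recent history. The z-score check is a stand-in for the small ML models mentioned above.

```python
import logging
import statistics

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("agent-telemetry")
history: list[float] = []

def record_and_check(tokens: float, window: int = 100, z_max: float = 3.0) -> None:
    """Log a per-request metric and warn when it deviates sharply
    from the recent baseline (a stand-in for a learned detector)."""
    log.info("tokens=%s", tokens)
    recent = history[-window:]
    if len(recent) >= 10:  # need a baseline before scoring anomalies
        mean = statistics.fmean(recent)
        stdev = statistics.stdev(recent) or 1e-9  # guard against zero variance
        if abs(tokens - mean) / stdev > z_max:
            log.warning("anomaly: tokens=%s (baseline mean %.0f)", tokens, mean)
    history.append(tokens)

for t in [510, 495, 530, 500, 488, 515, 505, 492, 520, 498, 4800]:
    record_and_check(t)   # the final request triggers the anomaly warning
```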

20.36: So where are we with this? How many teams are doing it? 

20.42: Very few. Very, very few. Maybe just the top labs. The ones who are doing reinforcement learning training and using RL environments, because that’s where they’re getting their data to do RL. But people who are not using RL to be able to retrain their model, they’re not really doing much of this part; they’re still depending very much on external accounts.

21.12: I’ll get back to RL in a second. But one topic you raised when you pointed out the transition from MLOps to LLMOps was the importance of FinOps, which is, for our listeners, basically managing your cloud computing costs—or in this case, increasingly mastering token economics. Because basically, it’s one of these things that I think can bite you.

For example, the first time you use Claude Code, you go, “Oh, man, this tool is powerful.” And then boom, you get an email with a bill. I see, that’s why it’s powerful. And you multiply that across the board to teams who are starting to maybe deploy some of these things. And you see the importance of FinOps.

So where are we, Abi, as far as tooling for FinOps in the age of generative AI and also the practice of FinOps in the age of generative AI? 

22.19: Less than 5%, maybe even 2% of the way there. 

22.24: Really? But obviously everyone’s aware of it, right? Because at some point, when you deploy, you become aware. 

22.33: Not enough people. A lot of people just think about FinOps as cloud, basically the cloud cost. And there are different kinds of costs in the cloud. One of the things people are not doing enough is profiling their models properly, which is [determining] “Where are the costs really coming from? Is it our models’ compute? Are they taking too much RAM?” 

22.58: Or are we using reasoning when we don’t need it?

23.00: Exactly. Now that’s a problem we solve very differently. That’s where, yes, you can do kernel fusion, define your own custom kernels. Right now there’s a massive number of people who think we need to rewrite kernels for everything. It’s only going to solve one problem, which is the compute-bound problem. But it’s not going to solve the memory-bound problem. Your data engineering pipelines are what’s going to solve your memory-bound problems.

And that’s where most of the focus is missing. I’ve mentioned it in the book as well: Data engineering is the foundation of first being able to solve the problems. And then we move to the compute-bound problems. Do not start optimizing the kernels before that. And then the third part would be the communication-bound problem, which is “How do we make these GPUs talk smarter with each other? How do we figure out the agent consensus and all of those problems?”

Now that’s a communication problem. And that’s what happens when there are different levels of bandwidth. Everybody’s dealing with the internet bandwidth as well, the kind of serving speed as well, different kinds of cost and every kind of transitioning from one node to another. If we’re not really hosting our own infrastructure, then that’s a different problem, because it depends on “Which server do you get assigned your GPUs on again?”

24.20: Yeah, yeah, yeah. I want to give a shout out to Ray—I’m an advisor to Anyscale—because Ray basically is built for these sorts of pipelines because it can do fine-grained utilization and help you decide between CPU and GPU. And just generally, you don’t think that the teams are taking token economics seriously?

I guess not. How many people have I heard talking about caching, for example? Because if it’s a prompt that [has been] answered before, why do you have to go through it again? 

25.07: I think plenty of people have started implementing KV caching, but they don’t really know. . . Again, one of the questions people don’t understand is “How much do we need to store in the memory itself, and how much do we need to store in the cache?” which is the big memory question. So that’s the one I don’t think people are able to solve. A lot of people are storing too much stuff in the cache that should actually be stored in the RAM itself, in the memory.

And there are generalist applications that don’t really understand that this agent doesn’t really need access to the memory. There’s no point. It’s just lost in the throughput really. So I think the problem isn’t really caching. The problem is that differentiation of understanding for people. 
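For the simple case Ben raises, an exact repeat of a previously answered prompt, response-level caching is straightforward. This sketch covers only that case (it is distinct from attention-level KV caching inside the serving stack), and all names in it are illustrative:

```python
import hashlib

class ResponseCache:
    """Exact-match response cache: a repeated prompt with the same model
    and parameters skips inference entirely. Only sound for deterministic
    settings (e.g., temperature 0)."""
    def __init__(self) -> None:
        self._store: dict[str, str] = {}

    def _key(self, prompt: str, model: str, temperature: float) -> str:
        # Hash the prompt together with anything that changes the answer.
        return hashlib.sha256(f"{model}|{temperature}|{prompt}".encode()).hexdigest()

    def get_or_call(self, prompt, model, temperature, call_fn):
        k = self._key(prompt, model, temperature)
        if k not in self._store:            # cache miss: pay for inference
            self._store[k] = call_fn(prompt)
        return self._store[k]

cache = ResponseCache()
fake_llm = lambda p: f"answer to: {p}"      # stand-in for a real model call
print(cache.get_or_call("What is RCS?", "example-model", 0.0, fake_llm))  # miss
print(cache.get_or_call("What is RCS?", "example-model", 0.0, fake_llm))  # hit
```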

25.55: Yeah, yeah, I just threw that out as one element. Because obviously there’s many, many things to mastering token economics. So you brought up reinforcement learning. A few years ago, obviously people got really into “Let’s do fine-tuning.” But then they quickly realized. . . And actually fine-tuning became easy because so many services appeared where you can just focus on labeled data. You upload your labeled data, boom, come back from lunch, you have a fine-tuned model.

But then people realize that “I fine-tuned, but the model that results isn’t really as good as my fine-tuning data.” And then obviously RAG and context engineering came into the picture. Now it seems like more people are again talking about reinforcement learning, but in the context of LLMs. And there’s a lot of libraries, many of them built on Ray, for example. But it seems like what’s missing, Abi, is that fine-tuning got to the point where I can sit down a domain expert and say, “Produce labeled data.” And basically the domain expert is a first-class participant in fine-tuning.

As best I can tell, for reinforcement learning, the tools aren’t there yet. The UX hasn’t been figured out in order to bring in the domain experts as the first-class citizen in the reinforcement learning process—which they need to be because a lot of the stuff really resides in their brain. 

27.45: The big problem here, very much to the point of what you pointed out, is the tools aren’t really there. And one very specific thing I can tell you is most of the reinforcement learning environments that you’re seeing are static environments. Agents are not learning statically. They are learning dynamically. If your RL environment cannot adapt dynamically, you’re stuck in the paradigm that emerged around 2018, 2019, when OpenAI Gym and a lot of reinforcement learning libraries were coming out.

28.18: There is a line of work called curriculum learning, which is basically adapting the difficulty of the tasks to the model’s results. So basically that can now be used in reinforcement learning, but I’ve not seen any practical implementation of using curriculum learning for reinforcement learning environments. So people create these environments—fantastic. They work well for a little bit of time, and then they become useless.

So that’s where even OpenAI, Anthropic, those companies are struggling as well. They’ve paid heavily for yearlong contracts to say, “Can you build this vertical environment? Can you build that vertical environment?” And that works fantastically. But once the model learns on it, then there’s nothing else to learn. And then you go back into the question of “Is this data fresh? Is this adaptive with the world?” And it becomes the same RAG problem over again. 
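A minimal sketch of the curriculum idea applied to an RL-style training loop: raise task difficulty when the recent success rate clears a mastery threshold, back off when the agent is overwhelmed. The thresholds and the simulated agent are illustrative assumptions:

```python
import random

class AdaptiveCurriculum:
    """Adjust environment difficulty from the agent's recent success rate."""
    def __init__(self, level: int = 1, window: int = 20):
        self.level, self.window, self.results = level, window, []

    def report(self, success: bool) -> None:
        self.results.append(success)
        if len(self.results) >= self.window:
            rate = sum(self.results[-self.window:]) / self.window
            if rate > 0.8:                      # mastered: harder tasks
                self.level += 1
                self.results.clear()
            elif rate < 0.3:                    # overwhelmed: ease off
                self.level = max(1, self.level - 1)
                self.results.clear()

curriculum = AdaptiveCurriculum()
for _ in range(500):
    # Simulated agent: success gets less likely as difficulty rises.
    success = random.random() < max(0.1, 0.95 - 0.1 * curriculum.level)
    curriculum.report(success)
print("difficulty reached:", curriculum.level)
```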

29.18: So maybe the problem is with RL itself. Maybe we need a different paradigm. It’s just too hard. 

Let me close by looking to the future. The space is moving so fast this might be an impossible question to ask, but if you look at, let’s say, the next 6 to 18 months, what are some things in the research domain that you think are not being talked about enough but might produce enough practical utility that we will start hearing about them?

29.55: One is how to profile your machine learning models, like the entire systems end-to-end. A lot of people do not understand them as systems, but only as models. So that’s one thing which will make a massive amount of difference. There are a lot of AI engineers today, but we don’t have enough system design engineers.

30.16: This is something that Ion Stoica at Sky Computing Lab has been giving keynotes about. Yeah. Interesting. 

30.23: The second part is. . . I’m optimistic about seeing curriculum learning applied to reinforcement learning as well, where our RL environments can adapt in real time, so when we train agents on them, they are dynamically adapting as well. That’s also [some] of the work being done by labs like Sakana, which are working on artificial life—ALife, all of that stuff—evolution of any kind of machine learning model. 

30.57: The third thing where I feel like the communities are falling behind massively is on the data engineering side. That’s where we have massive gains to get. 

31.09: So on the data engineering side, I’m happy to say that I advise several companies in the space that are completely focused on tools for these new workloads and these new data types. 

Last question for our listeners: What mindset shift or what skill do they need to pick up in order to position themselves in their career for the next 18 to 24 months?

31.40: For anybody who’s an AI engineer, a machine learning engineer, an LLMOps engineer, or an MLOps engineer, first learn how to profile your models. Start picking up Ray very quickly as a tool to just get started on, to see how distributed systems work. You can pick vLLM if you want, but start understanding distributed systems first. And once you start understanding those systems, then start looking back into the models themselves. 

32.11: And with that, thank you, Abi.


Trump's '28-point plan' for Ukraine War provokes political earthquake | Responsible Statecraft

  • Churchill quote: Applied to reported US-Russia draft framework as end of beginning in Ukraine peace process.
  • Negotiators: US envoy Steve Witkoff, Russian envoy Kirill Dmitriev, possibly JD Vance, Marco Rubio, Jared Kushner.
  • Deal status: Trump administration sees imminent agreement; Russia stresses none reached; uncertainties on concessions and Ukrainian acceptance.
  • Donbas terms: Ukraine withdraws from 14% held area, to be demilitarized under neutral peacekeepers.
  • Other regions: Ceasefire along front lines in Zaporizhia and Kherson; Russia abandons full provincial claims.
  • Sovereignty concessions: US recognizes Russian control over Donbas and Crimea, implying sanctions relief; Ukraine not required to.
  • Military and security: No long-range missiles for Ukraine, force size limits, US guarantees, Russian okay for EU but not NATO membership.
  • Additional elements: Russian as second official language in Ukraine; proposals for UN/BRICS ratification, suspended sanctions, stockpiled arms.

When it comes to the reported draft framework agreement between the U.S. and Russia, and its place in the Ukraine peace process, a quote by Winston Churchill (on the British victory at El Alamein) may be appropriate: “Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.” This is because at long last, this document engages with the concrete, detailed issues that will have to be resolved if peace is to be achieved.

The plan has apparently been worked out between U.S. envoy Steve Witkoff and Russian envoy Kirill Dmitriev (together reportedly with Vice President JD Vance, Secretary of State Marco Rubio and the president's son-in-law Jared Kushner) but a great deal about it is highly unclear. The Trump administration reportedly believes that a deal is imminent, but the Russian government has been at pains to stress that no agreement has yet been reached. We do not know if Moscow will try to exact further concessions; the details of several key points have not been revealed; and above all, it may be impossible to get the Ukrainian government to agree to essential elements, unless the Trump administration is prepared to bring extremely heavy pressure to bear on both Ukraine and America’s European allies.

It has already been reported that President Zelensky has rejected the plan and is working with European governments to propose an alternative — though so far, nothing that the Europeans have proposed stands the remotest chance of being accepted by Moscow.

Among the most difficult points for Ukraine will be the reported draft agreement that Ukraine should withdraw from the approximately 14% of the Donbas that it still holds, and that it has sacrificed tens of thousands of lives to retain. But with the key Ukrainian town of Pokrovsk seemingly close to falling, the Trump administration apparently believes that the rest of the Donbas is sooner or later bound to fall too, and there is no point in losing more Ukrainian lives in a vain attempt to keep it, and also risk Ukrainian military collapse and losing more territory beyond the Donbas.

The draft agreement also reportedly softens the blow for Ukraine by stating that the area handed over will be demilitarized and controlled by neutral peacekeepers. In the other two provinces claimed (but only partially occupied) by Russia, Zaporizhia and Kherson, the ceasefire line will run along the existing front line, and Russia will abandon its demand for the whole of these provinces.

In a huge concession to Russia, however, the Trump administration — and possibly other countries like Turkey and Qatar, which helped mediate this proposed deal — is willing to recognize Russian legal sovereignty over the Donbas and Crimea (which would also imply the lifting of many U.S. sanctions on Russia), though it does not expect Ukraine to do so.

The draft agreement apparently excludes long-range missiles for Ukraine and would impose limits on the size of the Ukrainian armed forces, though we do not know how great these limits will be. The Ukrainian government agreed to the principle of arms limitations at the Istanbul talks in March 2022, but has since categorically rejected the idea.

The draft agreement also reportedly includes unspecified U.S. security guarantees to Ukraine, and a formal Russian acknowledgment (already stated by President Putin and Foreign Minister Lavrov) of Ukraine’s right to join the European Union, in return for the exclusion of NATO membership for Ukraine. It has not been revealed however whether this would require a change to the Ukrainian constitution to restore the previous commitment to neutrality, something that could be hard to pass through the Ukrainian parliament.

This is also true of another key element of the reported plan — the establishment of Russian as a second official language in Ukraine. This is a neuralgic issue for Ukrainian ethnic nationalists, but they should recognize and respond with gratitude to the fact that in the face of the Russian invasion the great majority of Russians and Russian speakers have remained loyal to Ukraine.

Predictably, the leaked plan has drawn immediate denunciation from both Ukrainian and Western sources, with it being described as a demand for Ukraine’s “capitulation.” This is mistaken. As the Quincy Institute has long pointed out, an agreement that leaves three quarters of Ukraine independent and with a path to EU membership would in fact be a Ukrainian victory, albeit a qualified one.

This should be obvious if you look at the Russian government’s goal at the start of the war of turning the whole of Ukraine into a client state, or alternatively of seizing the whole of eastern and southern Ukraine. It would also be a Ukrainian victory in terms of the 500-year-long history of Russian, Polish and Turkish rule over Ukraine. And by way of additional evidence, you would only have to listen to the howls of protest that an agreement along these lines will evoke from Russian hardliners, who still dream of achieving Russian maximalist aims. European comments that this draft agreement concedes Russia’s “maximalist demands” are therefore nonsense.

When it comes to the Western security guarantees to Ukraine promised (but not specified) in the draft agreement, it is crucial to recognize that in international affairs and in history there is no such thing as an absolute guarantee, let alone a permanent one. There are however a whole set of commitments that can be included in order to deter future Russian aggression: the peace agreement should be ratified by the U.N. Security Council and endorsed by the BRICS; Western economic sanctions should be not ended but suspended, with a snap-back clause stating that they will automatically resume if Russia resumes aggression; designated long-range missiles and other arms can be stockpiled with a legally binding guarantee that they will be provided to Ukraine if Russia restarts the war.

Above all, Ukraine should retain the complete and guaranteed right to receive and develop the defensive weapons that throughout this war have played a key part in slowing the Russian advance to a crawl and inflicting immense casualties on the Russian army. Because in the end, the greatest deterrent by far against Russia starting a new war is how badly its armed forces have suffered and performed in this war. If Russia has achieved its basic stated goals in Ukraine, would any future Russian government really want to go through this again?

Certain Western officials, politicians, and commentators believe, and have stated openly, that keeping the Ukraine War going is “money well spent” because it weakens Russia without sacrificing U.S. lives. But apart from the deep immorality of sacrificing Ukrainian lives for this goal, the longer the war goes on the greater the risk that Ukraine will suffer a far greater defeat, Russia a far greater victory, and the U.S. a far greater humiliation.

Given the growing evidence of Ukrainian military weakness and Russian ability to press forward with its offensives, simple prudence dictates the search for an early peace on reasonable terms. That is what the present plan promises, and everyone who truly has Ukraine’s and Europe’s interests at heart should support it.

Stacey Plaskett's Nightmare Week Continues

  • Frankfurt reference: Quotes “On Bullshit” to characterize the empty speech of Plaskett’s House floor defense.
  • WaPo revelations: Text messages show Epstein coached Plaskett on questioning Michael Cohen during a 2019 hearing.
  • Censure resolution: House measure targeting Plaskett over her Epstein ties failed.
  • Constituent claim: Plaskett says Epstein’s text was one of many she received from constituents after a viral moment.
  • RONA exchange: Epstein explains that “RONA” is Trump’s assistant Rhona Graff; Plaskett then asks Cohen about her.
  • Advice denial: Plaskett insists she needs no advice on questioning anyone, citing her experience as a prosecutor.
  • Additional texts: Epstein compliments her outfit and notes her chewing; the exchanges span nearly seven hours, starting before the hearing.
  • WaPo critique: Plaskett waves a transcript, accusing the Post of airing a selective 30-second clip that ignores her full five minutes of questioning.

Late Tuesday night, I recalled the late Harry G. Frankfurt’s increasingly relevant “On Bullshit.”

When we characterize talk as hot air, we mean that what comes out of the speaker’s mouth is only that. It is mere vapor. His speech is empty, without substance or content. His use of language, accordingly, does not contribute to the purpose it purports to serve. No more information is communicated than if the speaker had merely exhaled.

I thought of that book while watching Democratic Delegate Stacey Plaskett’s performance on the House floor, where she defended herself against a censure resolution inspired by revelations in the Washington Post that she received coaching from world-renowned scumbag Jeffrey Epstein during a 2019 hearing, as she prepared to question former Trump lawyer and fixer Michael Cohen.

The Post story and accompanying video are damning. Of course, the revelations are a bigger story because they involve Epstein. He was still a few months away from being charged with a slew of sex-trafficking offenses, but he was already a registered sex offender when she effectively said, “Jeffrey, take the wheel.” However, this is bad no matter who was driving her questioning in real time. She was nothing more than a ventriloquist dummy, and if you haven’t seen the video, judge for yourself:

Epstein was only a constituent

The censure resolution failed, but make no mistake that her performance was “Mission Accomplished”-level bull. Plaskett starts by heaping praise on herself for a “moment that went viral” earlier in the hearing during an interaction with Republican Congressman Jim Jordan. That, she claims, prompted “innumerable texts” from friends, foes, and constituents.

And I got a text from Jeffrey Epstein, who at the time was my constituent, who was not public knowledge at that time that he was under federal investigation and, who was sharing information with me.

So of all those supposed “innumerable texts,” Epstein just happened to stand out as if he were any other constituent. It was just a coincidence, then, that a few months earlier she landed $30,000 from him for the Democratic Congressional Campaign Committee and that he had already given maximum amounts to her own campaigns. Yes, he was a constituent, but so what? Bob Menendez also had “constituents.”

“I don’t need to get advice…”

Now I heard recently from someone that I was taking advice from him. Let me tell you something: I don’t need to get advice on how to question anybody from any individual.

This one is my personal favorite because the video from 2019 clearly shows she did need advice!

Epstein pointed out to Plaskett that “Cohen brought up RONA - keeper of the secrets.”

“RONA” is Rhona Graff, a former executive assistant to Trump. Plaskett was thrown for a loop.

“RONA ??” she texted back.

Forty-one seconds went by. She still couldn’t figure it out.

Plaskett: “Quick I’m up next is that an acronym?”
Epstein: “Thats his assistant.”

Just in time. She proceeded to ask Cohen three questions about Graff, twice referring to her as “Ms. Rona.” Plaskett grew up in Brooklyn, so I doubt she was exhibiting Southern pleasantries.

In any case, Epstein was pleased. “Good work,” he texted.

Plaskett also mentioned her experience as a narcotics prosecutor in New York and her appointment to the Justice Department after 9/11. “I know how to question individuals,” she said.

The Washington Post

Plaskett takes aim at the Post, saying she questioned Cohen for five minutes. In case there’s any doubt, she waves a transcript as if to say, Proof! Plaskett then complains, “The Washington Post only shows you 30 seconds and takes from it one individual’s name that I got from Jeffrey Epstein, and didn’t know who the individual was.”

This 30-second claim is more nonsense. While the video shows Plaskett questioning Cohen for roughly 30 seconds, most of it shows her and the text exchanges in the moments before her questioning begins. The text exchange concerning “Ms. Rona” spanned about 10 minutes, although not continuously. The first text exchange, however, occurred several hours earlier, before the hearing began.

In fact, there were many more texts throughout the day that were reported in the Post’s story but weren’t shown in the video, which of course Plaskett conveniently ignored. The exchanges, after all, indicated they were buddies.

At 10:02 a.m., Epstein texted Plaskett: “Great outfit”

“You look great,” he added at 10:22 a.m. “Thanks!” she replied shortly afterward.

At around 10:40 a.m., a broadcast feed cut to Plaskett, showing her moving her mouth as if she were chewing something.

At 10:41 a.m., Epstein sent this message to Plaskett: “Are you chewing”

“Not any more,” she replied. “Chewing interior of my mouth. Bad habit from middle school”

At 12:50 p.m., Epstein asked: “How much longer for you”

“Hours. Go to other mtgs,” she replied.

The intermittent exchanges spanned nearly seven hours, from 7:55 a.m. to at least 2:34 p.m. There is no substance to this twisted version of the record, just a deeper pile of bull.

“Weaponized it for Political Theater”

Plaskett now presented herself as a victim of government weaponization:

“They’ve taken a text exchange, which show no participation, no assistance, no involvement in any illegal activity, and weaponized it for political theater, because that’s what this is.”

This is another trait of a good bullshitter: imply your critics are accusing you of something they’re not. No one accused Plaskett of doing anything illegal by texting Epstein. It was just a way to deflect from her actions.

Fortunately, she’s not a voting member of Congress since she represents the Virgin Islands. Obviously, though, she still has a large platform. Who’s coaching her now?
