- Technological Impact: Artificial intelligence agents are being deployed to automate complex white-collar tasks, impacting industries such as law, finance, and cyber security.
- Trust Requirements: High-stakes sectors prioritize authoritative, traceable, and accountable outcomes, viewing these standards as critical differentiators over mere processing speed.
- Industry Exposure: Market vulnerability varies significantly, with unregulated sectors or those reliant on public data facing higher disruption risks than those requiring proprietary data and domain expertise.
- Enterprise Integration: Existing software providers are incorporating AI plug-ins to enhance efficiency, though observers note these cannot replace fundamental systems of record or established regulatory frameworks.
- Financial Evaluation: Public market sentiment remains focused on which firms can effectively capture value, with software companies under pressure to demonstrate clear return on investment through upselling.
- Cyber Security Dynamics: While AI models demonstrate improved capability in identifying code vulnerabilities, human expertise remains essential for broad deterministic reasoning and handling active adversarial threats.
- Operational Limitations: Current AI tools are described as ineffective at high-level judgment and prone to hallucinations, rendering them unsuitable for scenarios requiring absolute precision and human accountability.
- Strategic Collaboration: AI developers acknowledge professional limitations, positioning their software as complementary tools to boost efficiency rather than as replacements for comprehensive enterprise platforms.
AI “agents” are threatening to transform white-collar work, offering cheaper, faster software that can perform the tasks of lawyers, bankers and accountants.
The reality is more complicated.
Since the start of the year, AI companies such as Anthropic have released tools designed to automate tasks across professions, prompting a sharp reassessment of stocks from cyber security to financial services.
But interviews with a dozen chief executives, investors and analysts across sectors suggest not all parts of the market are equally exposed. In industries where accuracy, accountability and regulation matter, trust is emerging as the critical faultline.
“Fiduciary professions like law, tax, audit and compliance require more than . . . accelerating everyday knowledge work,” said Steve Hasker, chief executive of Thomson Reuters.
“What ultimately matters is whether the output is authoritative, traceable and accountable to professional standards. In high-stakes environments, speed alone isn’t the differentiator. Trust is.”

Anthropic released a slew of tools this year that allow professionals from lawyers to bankers to infuse their day-to-day tasks with AI capabilities © Jeenah Moon/Bloomberg
Despite bullishness among some business leaders, there is broad agreement that companies must evolve in the face of products like Anthropic’s Claude Cowork or face extinction.
Customer services software companies, as well as sectors that are unregulated or use public data, such as data aggregators or marketing content firms, are all viewed as under threat.
“It is indisputable that the technology is very powerful,” said Victor Englesson, a partner at Nordic-headquartered investment firm EQT Partners, who focuses on software and technology investments.
“The key question, which the public market has voted on, is who is best positioned to capture that value. At the core, the size of the pie has grown. There will be winners and losers, but it will come down to execution.”
Investors and executives said they were drawing up war plans to grow and reallocate capital, after Anthropic released a slew of tools this year that allow professionals from lawyers to bankers to infuse their day-to-day tasks with AI capabilities.
The new products are an extension of the company’s Claude Code, which uses large language models to generate code. Anthropic’s own finance department is using Cowork — a version of Claude Code for non-technical professionals — as it prepares for an initial public offering as early as this year.
“Our finance team uses this for a lot of our growth projections, so they’re constantly understanding how much capacity we’re using and which customer segments are growing and which we should invest more in,” said Catherine Wu, head of product for Claude Code at Anthropic.
Customisable AI “plug-ins” help automate company-specific software, allowing AI to help with contract reviews, sift job applications, visualise ideas or perform financial analysis and modelling.
When Anthropic first launched these software-focused tools in February, a stock market sell-off dubbed the “SaaS-pocalypse” indiscriminately hit software businesses.
After the initial sell-off, Anthropic unveiled an updated set of tools designed to work with many of those same companies, offering a brief reprieve to their embattled share prices.
Thomson Reuters launched CoCounsel Legal, its AI tool powered by Claude’s latest model, to perform complex tasks from legal research to document analysis and drafting, using its proprietary content.
Hasker said AI models on their own were not necessarily built for fiduciary work, where “professionals are responsible for outcomes and errors carry real consequences”.
He added that “almost right isn’t good enough, whether in a courtroom, a boardroom or an audit”, and said grounding systems in domain expertise would become essential in coming years.
Private investors focused on software agree that AI plug-ins can be bolted on to existing systems to improve efficiency and replace some human roles. But they cannot fully replace core enterprise software built on years of accumulated data across a company’s supply chain and operations.
“[AI] plug-ins . . . can’t replace proprietary data, systems of records, regulatory context, network effects [and] distribution . . . that’s my defensibility,” said Jean-Baptiste Brian, co-chief executive of private equity firm Hg Capital, which invests primarily in software businesses.
Others argue that even if AI does not entirely kill enterprise software, slowing growth or marginal hits to revenues will leave Wall Street investors heavily exposed.
“There has been an issue in terms of top-line growth for a long time and that has been amplified by AI,” said Englesson. To prove the public markets wrong, he said, software groups must show that customers are willing to pay for new AI capabilities by upselling and bringing in new customers to show a clear return on investment.

Anthropic’s own finance department is using Cowork — a version of Claude Code for non-technical professionals © Gabby Jones/Bloomberg
In February, Anthropic released a preview of a cyber security tool, triggering a sharp sell-off across the industry that hit companies including Palo Alto Networks, CrowdStrike, Okta and SentinelOne.
Dan Schiappa, a 20-year veteran of the cyber security industry and president of technology and services at Arctic Wolf, acknowledged that the latest Claude releases were a “vast . . . step function improvement” in terms of cyber security competence.
“It’s a bit of a glimpse into the future,” Schiappa said. “If you’re in that market of code vulnerability finding, like [start-up] Snyk for example, it is meaningful for your business.”
This week, Anthropic previewed a new model, Mythos, which it said had found “thousands of high-severity vulnerabilities” in every major operating system and web browser.
Mythos’ capabilities could “reshape cyber security”, Anthropic said. It is working with CrowdStrike and Palo Alto Networks, as well as several Big Tech groups, to address the vulnerabilities.
However, Schiappa said that cyber security expertise was far broader than simply spotting vulnerabilities in code.
“AI is great at very directed tasks, it’s not quite great at broad deterministic reasoning yet. It can make a recommendation but it’s not sure enough to take action. That’s where humans come in,” he said. “They have . . . done hand-to-hand combat with adversaries. They know it, they have seen it before, intuitively.”
He added: “We are an industry where we cannot take chances. People’s businesses are at stake.”
In financial services, Anthropic has launched a cluster of plug-ins focusing on subdivisions such as investment banking, private equity, equity research and wealth management.
The company has said this would enable AI to perform tasks like reviewing large sets of transaction documents, building competitor analyses, parsing earnings transcripts and updating financial models in real time by extracting financial data and modelling scenarios.
Investors who have used the tools say AI agents like Cowork are good at gathering and parsing unstructured data, and that they are using them for tasks like market research, deal sourcing and screening, and even drafting early-stage investment memos.
However, Hg’s Brian says the technology remains “absolutely crap at judgment”, describing AI tools as “sycophantic”, or agreeing too readily with a user’s hypothesis. “It can’t do investing right now,” he said.
Sam Jones, chief operating officer of Compare the Market, which helps customers compare prices for insurance and financial products, said the risk of AI supplanting companies like his was “a pretty key question for the industry at the moment.”
But Jones said companies like his were building a secure harness around AI models to ensure their outputs were accurate, transparent and verifiable, as customers and regulators expect.
“Plugging [AI] into an insurer, whilst technically possible, today generates pretty catastrophic consumer outcomes,” he said. “It can hallucinate on consent, it can hallucinate on terms and conditions, it can hallucinate on prices.”
Anthropic’s Wu emphasised that AI systems needed existing software platforms: the aim was to make work more efficient, not to replace traditional players.
“We’re not going to be best in class at collaboration [or] at managing all your sales deals, or all your designs. We would just never have that ability to assume all that functionality within Cowork,” she said. “Being an everything app . . . it feels overly ambitious for us to do.”
Additional reporting by Alexandra Heal and Antoine Gara
