US, UK Regulators Sound Alarm on Bank AI Risks

Quick Reads
- US Treasury Secretary Scott Bessent and Fed Chair Jerome Powell convened an urgent meeting with top bank CEOs to warn of cyber risks from Anthropic’s new Mythos AI model.
- The AI model is reportedly capable of identifying and exploiting weaknesses across every major operating system and web browser.
- Separately, the UK government is considering a common testing regime for AI models used by lenders, as regulators worry current bank monitoring is insufficient.
US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell held an urgent meeting with the chief executives of America’s largest banks this week to warn of cyber risks posed by Anthropic’s latest artificial intelligence model, according to two sources familiar with the matter. The Treasury-hosted meeting in Washington on Tuesday came just days after Anthropic launched its powerful Mythos model, which the company says can identify and exploit weaknesses across “every major operating system and every major web browser.” Meanwhile, across the Atlantic, the UK government is weighing a proposal for a common testing regime for general-purpose AI systems used by British lenders, following concerns raised by the Bank of England.
US warning on Mythos
Anthropic, the startup behind the Claude chatbot, stopped short of a broad release for its new Mythos model, citing concerns it could expose previously unknown cybersecurity vulnerabilities. The company has said it is in ongoing discussions with US government officials about the model’s “offensive and defensive cyber capabilities” and has briefed senior officials and key industry stakeholders ahead of its release. Access to Mythos will be limited to about 40 technology companies, including Microsoft and Google.
The meeting included CEOs from Citigroup, Morgan Stanley, Bank of America, Wells Fargo and Goldman Sachs, according to Bloomberg News. JPMorgan CEO Jamie Dimon was unable to attend. The gathering was aimed at ensuring banks are aware of the risks posed by Mythos and similar models, and are taking steps to defend their systems. The warnings reflect a growing recognition that advanced AI systems could be used by malicious actors to breach financial infrastructure.
UK considers testing regime
In the UK, the proposal for a common AI testing regime was put to the Department for Science, Innovation and Technology last month by Starling Bank chief information officer Harriet Rees, who serves as the government’s financial services AI “champion” and co-chairs the Bank of England’s AI task force. Rees argued that many UK lenders rely on AI models developed in the US, and that an independent assessment would provide “comfort that they’ve at least looked at [the models] and they know that they all are at a certain standard.”
The proposal follows AI meetings held in October by the Bank of England’s Prudential Regulation Authority, where banks were told that AI model monitoring was “not frequent enough.” Currently, there is no legal requirement for AI systems to undergo assessment before being deployed in regulated sectors, though banks carry out their own reviews. Rees suggested the AI Security Institute (AISI) as the “most obvious body” to take on the role, though a government spokesperson signalled that AISI’s remit is focused on research, not third-party testing.
Market Snapshot
- Anthropic AI Model: Mythos (launched April 2026)
- Capability Claim: Can exploit weaknesses across every major operating system and web browser
- Initial Access: ~40 technology companies (Microsoft, Google among them)
- US Meeting Attendees (Reported): CEOs of Citi, Morgan Stanley, Bank of America, Wells Fargo, Goldman Sachs
- UK Proposal: Common testing regime for bank AI models
- Current UK Legal Requirement for AI in Regulated Sectors: None (only voluntary bank reviews)
- Bank of England Concern: AI model monitoring “not frequent enough”
