AI is no longer just a support tool for banks; it is quickly turning into something they may need to defend themselves against. In the US, some of the biggest financial institutions are now quietly experimenting with a powerful new AI system that can behave like a cyber attacker. The push is not coming from the banking sector alone: according to Bloomberg, officials linked to Donald Trump’s administration are urging lenders to take this technology seriously and test how it could expose weaknesses in their own systems before someone else does.
At the centre of this development is Mythos, a newly introduced AI model developed by Anthropic. Unlike traditional cybersecurity tools, Mythos is designed to think more like an attacker. It can scan systems, identify hidden vulnerabilities, and even figure out ways those weaknesses might be exploited. That dual capability — defensive as well as offensive — is what has caught the attention of regulators.
So far, access to Mythos has been tightly controlled. JPMorgan Chase is among the early institutions that have begun working with the model, but it will not be alone for long. Other major players such as Goldman Sachs, Citigroup, Bank of America and Morgan Stanley are also testing or preparing to test the system internally, according to people familiar with the matter.
The urgency around this move became clearer after a high-level meeting held in Washington earlier this month. Senior executives from top Wall Street firms were called in on short notice by US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell. During the discussion, officials did not point to a specific ongoing threat. Instead, they delivered a warning that banks should start stress-testing their systems against advanced AI capabilities before those tools become widely available.
This change in thinking reflects a growing concern among policymakers. Cybersecurity risks have always been part of banking, but AI is changing the scale and speed at which attacks could happen. A system like Mythos does not just find a single loophole; it can connect multiple small weaknesses and turn them into a serious breach. This kind of “chained vulnerability” approach has historically been difficult even for skilled human hackers.
Anthropic has highlighted some of these risks through its own internal testing. In one instance, the company found that the AI could identify ways to break into web browsers and allow a malicious website to access data from another site, potentially including sensitive financial information. In another case, the model was able to discover and combine multiple vulnerabilities on its own, a process that usually requires time, expertise and coordination when done manually.
To manage these risks, the rollout of Mythos has been deliberately limited. Only a small group of organisations have been given early access under a programme known as “Project Glasswing.” Alongside banks, companies like Amazon and Apple are also part of this initiative. The goal is to secure critical systems and understand the model’s behaviour before similar technologies become more widely accessible.
Government officials have also suggested that this is a matter of urgency. Kevin Hassett recently said, “It was appropriate that Secretary Bessent do what he did.” He added, “We’re taking every step we can to make sure that everybody is safe from these potential risks, including Anthropic agreeing to hold back the public release of the model until our officials have figured everything out.”