The most consequential event in Indian banking last week was something that did not happen. While Finance Minister Nirmala Sitharaman was convening bank chiefs in New Delhi to assess the cybersecurity implications of an advanced AI model called Mythos, the country’s payment rails carried more than seven hundred million transactions a day without faltering.
That silence, in a noisy week, is the better starting point for the conversation we now need to have. The institutions that endure the period ahead will be the ones that learn to read quiet days as data, not as the absence of events.
WHAT EXACTLY HAPPENED IN THE MEETING?
Let me first say plainly what the Mythos meeting on Thursday did and did not do. It convened. Sitting alongside the Finance Minister were the IT Minister, the RBI, NPCI, CERT-In, and the chiefs of the scheduled commercial banks.
The directions were procedurally correct. Banks were asked to take pre-emptive measures. A real-time threat-intelligence sharing mechanism with CERT-In was advised. The Indian Banks’ Association (IBA) was tasked with developing a coordinated institutional response.
The RBI and the Finance Ministry are now studying the extent of the risk to the domestic financial sector.
What the meeting did not do, and what its loudest commentators have been less careful about, was claim that an attack had occurred, that Indian banks had been breached, or that a specific exploit was underway.
The Finance Minister’s own framing was deliberate. Mythos, she said, is something about which very little is known, and that not many have tested or tried. She also reminded the room, fairly, that Indian banks have remained largely free of major cyber incidents over the decades through sustained digitisation, regular system upgrades, and disciplined customer-safety work.
A NEW KIND OF RISK
That posture, of seriousness without alarm, is itself a piece of expertise. It also leaves the harder question open. What banking has long understood as risk, in the formal sense, is the calculation of probabilities over a defined set of events. Defaults. Frauds. Breaches with known signatures.
The economist Frank Knight drew the distinction, more than a century ago, between this kind of risk and a different beast entirely, which he called uncertainty. Uncertainty is what we face when the event space itself is not yet known.
The Mythos conversation, stripped of its theatrics, is a conversation about the migration of certain financial-sector exposures from the first category into the second.
WHAT MYTHOS CAN DO
So what, exactly, is Mythos? The cleanest answer is the one Anthropic itself offered on April 7. Mythos Preview is a frontier general-purpose language model that, in the course of testing, demonstrated unusually strong capabilities at finding and exploiting vulnerabilities in software.
The company chose not to release it publicly, restricting access instead to a controlled consortium of about forty organisations under an initiative called Project Glasswing. Independent evaluation by the UK AI Security Institute found that the model could autonomously execute multi-stage attacks against vulnerable networks under controlled conditions, while noting that those test environments lacked the active defenders and detection tooling found in well-protected production systems.
The model was credited with surfacing a 27-year-old vulnerability in OpenBSD and with helping Mozilla identify and patch 271 vulnerabilities in Firefox.
Within days of the announcement, a small group obtained unauthorised access to the model.
HOW DANGEROUS IS MYTHOS IN PRACTICE?
The honest answer is that we do not yet know. What we can say with reasonable confidence is that Mythos, and the class of capability it represents, do three things at once. They accelerate the discovery of vulnerabilities. They lower the technical bar for exploitation. They shorten the window between a flaw existing in widely deployed software and a flaw being weaponised against it.
Whether that translates into catastrophic incidents or absorbable noise depends almost entirely on the readiness of defenders.
Capability is not the same as outcome. Where the outcome distribution settles is still being shaped, not determined, and the institutions that move first to adapt their posture will disproportionately determine where it lands.
The unauthorised access incident, almost a footnote in most coverage, is the part of the story that should hold our attention. It is not the worst-case capability claim. It is the speed at which containment failed. Even when one of the world’s most safety-conscious frontier AI laboratories restricts a model to a few dozen trusted partners, the population of people in practical proximity to it numbers in the thousands. It only takes one.
Charles Perrow, writing about nuclear and aerospace systems in the 1980s, observed that systems combining tight coupling with complex interactions tend to fail in ways their designers cannot anticipate. He called these normal accidents, by which he meant not normal in the sense of routine but normal in the sense of structurally inherent.
The frontier AI ecosystem, with its concentrated developers, distributed access, sprawling vendor environments, and rapid iteration, is now both tightly coupled and interactively complex. The vocabulary of normal accidents is going to become useful here.
Across financial systems globally, advanced computational capabilities are beginning to perturb the assumption that has quietly held the industry together for decades, namely that vulnerabilities are scarce, attackers are constrained, and defenders have time to react. Each of those three assumptions is now under pressure.
The conversation underway in New Delhi has parallels in Washington, where the US Treasury and the Federal Reserve convened bank CEOs in early April on similar lines; in Frankfurt, where the Digital Operational Resilience Act has begun to bind in earnest; in Singapore, where the Monetary Authority's technology risk guidance has been progressively tightened; and in London, where the Prudential Regulation Authority's operational resilience regime has now had several years to mature.
The international convergence is not coincidental. It reflects a shared, if quietly held, view among supervisors that the next class of operational risk will not respect national perimeters and will not arrive in the shape that the last class did.
The G7’s Fundamental Elements for Cybersecurity in the Financial Sector, now nearly a decade old, anticipated almost exactly this moment.
INDIA’S DIGITAL EDGE IS ALSO ITS WEAKNESS
India arrives at this question with assets that few other countries possess and exposures that few other countries share. Over the last decade, the country has built the most ambitious digital financial infrastructure of any economy at its income level. UPI alone processed 22.64 billion transactions in March, a record.
The Aadhaar-based authentication stack, the NPCI rails, the maturing account aggregator framework administered by Sahamati, the embedding of digital identity into nearly every consumer-facing financial product, and the early architecture of the Open Network for Digital Commerce have produced a level of interconnection that is, in any honest assessment, unprecedented at this scale.
The same property that makes Indian finance unusually efficient also makes it unusually exposed to a class of risk that does not behave like the risks current frameworks were designed for.
NPCI, in particular, now sits in the country’s payments topology in something close to the position of a central counterparty, a single point through which the systemic load passes. The phrase “too big to fail” has, in the Indian context, quietly acquired a new referent.
AI CHALLENGE FOR INDIAN BANKS
For Indian banks, the implications are specific. Attack economics are shifting: the cost of finding a serious flaw in widely used software is collapsing, which widens the credible adversary set well beyond the well-resourced state and quasi-state actors that currently dominate threat models.
The interdependence problem deepens: Indian financial institutions sit on shared cloud infrastructure, shared core banking platforms, shared identity rails, and a shared payments backbone. A vulnerability that crosses layers is a different category of problem from a vulnerability that lives in one.
Attribution becomes harder, because the whole point of advanced computational tooling is that it generalises. A breach pattern observed on one continent may not look the same when it surfaces here, which makes the coordinated institutional mechanism the Minister has called for genuinely important rather than merely procedural.
And the supply chain widens with every additional vendor relationship, every additional API, every additional managed service.
The 2018 Cosmos Bank heist in Pune remains an instructive case study, not because it was unique but because it was archetypal. Attackers compromised the bank’s ATM switch and built a proxy that authorised withdrawals without ever consulting the core banking system, then used SWIFT to wire funds abroad.
The bank lost roughly ninety-four crore rupees over a long weekend.
The lesson was not that Cosmos was negligent. The lesson was that an attacker who understands the architecture better than its operators do can, given a narrow window, make the system act against itself. Mythos-class capabilities make that window structurally easier to find.
COULD MYTHOS SPARK GEOPOLITICAL RISKS?
There is a geopolitical question here that deserves a direct answer. Could a capability of this kind become a tool of destabilisation in the hands of some governments?
Structurally, yes. But not in the way the action-film imagination suggests.
The relevant scenario is not a single dramatic strike on a financial system. It is something more patient and more difficult to refute. Imagine a sustained sequence of small, ambiguous, statistically distributed disturbances across the payment systems of a target economy.
Reconciliation breaks that no one can quite explain. Transactions that almost did not clear. Anomalies that resolve themselves before they are reported but accumulate in the tail. The objective of strategic competition through such means is not to break a system. It is to make the system feel to its own participants, and to its citizens, like something they can no longer fully trust.
Trust, once lost in financial infrastructure, is expensive and slow to rebuild. That is why the relevant policy frame is not threat, but dependence. The substrate beneath India’s admirable indigenous public digital infrastructure, namely the operating systems, the cloud platforms, the model weights and inference stacks, remains overwhelmingly imported.
The argument for accelerated indigenous AI safety research, sovereign cloud capability, well-resourced cybersecurity expertise inside the public sector, and continued investment in the country’s computational footing is not a nationalistic argument. It is a resilience argument.
IS MYTHOS A DIGITAL WEAPON OF MASS DESTRUCTION?
It is worth pausing here on a phrase that has begun to circulate in the looser commentary, which is the description of Mythos as a “digital weapon of mass destruction.” That framing should be retired, calmly and firmly.
Mass destruction, in its received meaning, is acute, instantaneous, catastrophic, and attributable. What advanced computational capabilities enable is closer to the opposite.
The risk is chronic rather than acute, distributed rather than concentrated, ambiguous rather than attributable. The destructive analogy, far from raising the alarm, actually understates the danger by mischaracterising it.
The honest comparison, if a comparison is necessary, is to environmental risk: the slow degradation of a substrate that everyone depends on but no one is responsible for, accumulating until something previously unthinkable becomes routine.
The disciplines that govern this kind of risk look less like deterrence and more like environmental governance. Anyone arguing that India needs a “cyber NATO” for AI is reaching for the wrong analogy. What we need is closer to a Montreal Protocol for the architecture of digital trust.
The smaller and harder problem sits at the rogue actor’s end. The asymmetry between attacker and defender has always been awkward in cyber. A bank must defend every door, every day. An attacker needs only one door, once.
Capabilities that compress the time and skill required to find that one door change the asymmetry in directions that matter for India in particular, where retail cyber fraud is already a measurable drag on financial inclusion.
The vectors worth taking seriously are unglamorous.
Synthetic identity attacks at scale that exploit eKYC infrastructure not designed to detect adversarial generative content. Spear-phishing campaigns that learn, across thousands of iterations, to mimic an institution’s internal vocabulary and then turn that vocabulary on its own staff. Account takeover at machine speed, where the fraud is committed and the funds are layered through three jurisdictions before the typical detection window has closed. Mule-network coordination that adapts faster than the regulators monitoring it. None of this is cinematic.
All of it is within reach of a small group with sufficient skill, access, and patience.
The institutions that will weather this period best are the ones that take the rogue actor case as seriously as the headline-grabbing state actor case. In fraud taxonomy as in epidemiology, the long tail is where the volume lives.
The deeper point, and the one I believe the Mythos meeting was implicitly reaching for, is that risk itself is changing form. For most of modern banking history, financial risk has been thought of as events. A default. A run. A specific attack with a specific signature.
The discipline of risk management has been the discipline of preparing for known shapes of trouble. What we are entering is a period in which risk arrives less as an event and more as a property of the system. The exposure is not the breach. The exposure is the design.
A regulatory and institutional architecture trained on events will struggle against an environment that produces conditions. The discipline that comes next will look less like underwriting and more like the work of high-reliability organisations, the hospitals running intensive care units, the controllers managing dense airspace, the operators running nuclear plants.
Institutions, in other words, that have learned to function well in the presence of unknowns. Operational resilience, the post-2018 regulatory shift championed by the Bank for International Settlements and the Bank of England, is the formal recognition that beyond a certain level of complexity the goal is no longer to prevent failure. It is to make failure survivable.
HOW CAN BANKS PROTECT THEMSELVES?
Which brings us, finally, to the most important question the Finance Minister’s meeting raised, namely what banks and financial institutions can actually do to protect themselves and their customers.
The four directions advised on Thursday (real-time threat-intelligence sharing, IBA-led coordination, expert engagement, and customer protection) are necessary. They are not yet sufficient.
To them, I would respectfully add four more, drawn from how operationally resilient institutions actually behave under stress.
First, observable systems, where engineers and supervisors can see how components behave before symptoms reach the customer, and where the institution maintains a current software bill of materials granular enough to know, on any given Tuesday, exactly which open source library is exposed to a newly disclosed flaw.
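The software-bill-of-materials discipline described above can be made concrete with a small sketch. This is illustrative only, assuming a minimal CycloneDX-style inventory of components; the library names, versions, and the advisory itself are hypothetical, not real vulnerability data.

```python
# Illustrative sketch only: check a minimal software bill of materials (SBOM)
# against a newly disclosed advisory. All names and versions are hypothetical.

# One service's SBOM: each entry records a library and the version deployed.
sbom = [
    {"name": "openssl", "version": "3.0.7"},
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "libxml2", "version": "2.11.5"},
]

# A newly disclosed flaw: the affected library and its known-vulnerable versions.
advisory = {"name": "log4j-core", "vulnerable_versions": {"2.14.1", "2.15.0"}}

def exposed_components(sbom, advisory):
    """Return the SBOM entries exposed to the advisory."""
    return [
        c for c in sbom
        if c["name"] == advisory["name"]
        and c["version"] in advisory["vulnerable_versions"]
    ]

for c in exposed_components(sbom, advisory):
    print(f"EXPOSED: {c['name']} {c['version']}")
```

The point of the exercise is the turnaround time: an institution that maintains this inventory answers the "are we exposed?" question in minutes; one that does not answers it in weeks.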
Second, designed redundancy of the kind that survives correlated failures, not only independent ones, in recognition that in a tightly coupled stack like India’s the failures most worth planning for are the ones that arrive together. Common-mode failure, the simultaneous breakage of components that share a hidden dependency, is the scenario the textbooks underweight and the field over-experiences.
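A line of back-of-the-envelope arithmetic shows why common-mode failure deserves the emphasis. The failure rates below are hypothetical round numbers chosen for illustration, not measurements from any real system.

```python
# Illustrative arithmetic only: why common-mode failure dominates planning.
# Assume each of two redundant components fails 0.1% of the time (hypothetical).
p_fail = 0.001

# Truly independent components: both must fail at the same moment.
p_both_independent = p_fail * p_fail  # one in a million

# Components sharing a hidden dependency (one cloud zone, one DNS provider):
# the shared layer's own failure takes both down together.
p_shared_layer = 0.001
p_both_common_mode = p_shared_layer + (1 - p_shared_layer) * p_fail * p_fail

print(p_both_independent)            # 1e-06
print(round(p_both_common_mode, 6))  # 0.001001
```

Under these assumed numbers, the hidden shared dependency makes the joint outage roughly a thousand times more likely than the independence assumption suggests, which is exactly the gap between the textbook and the field.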
Third, AI-aware stress testing, where scenarios explicitly assume adversaries with computational capabilities of the kind now publicly discussed, and where the question is not only whether the technology holds but whether the human decision pipelines around it remain coherent under speed. The most expensive failure mode in the next decade will be machines moving faster than the institutional reflexes that govern them.
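One mechanism such a stress test might probe is a governor that slows automated decisions to the pace human oversight can review. The sketch below is a minimal illustration under assumed thresholds, not a description of any bank's actual controls; the class name and limits are invented for the example.

```python
# Illustrative sketch only: a "speed governor" that forces escalation when
# automated approvals arrive faster than humans can review them.
# The thresholds are hypothetical.
import time
from collections import deque

class DecisionGovernor:
    """Allow at most max_actions automated approvals per sliding window;
    anything beyond that is refused and must be escalated to a human."""

    def __init__(self, max_actions, window_seconds):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps = deque()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop approvals that have aged out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_actions:
            return False  # over the limit: escalate to a human reviewer
        self.timestamps.append(now)
        return True

gov = DecisionGovernor(max_actions=3, window_seconds=60)
results = [gov.allow(now=t) for t in [0, 1, 2, 3]]
print(results)  # [True, True, True, False]
```

The design choice worth stress testing is the refusal path: what happens, organisationally, in the seconds after the fourth request is refused is where the "human decision pipeline" either holds or does not.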
Fourth, governance that takes weak signals seriously, because the teams that catch emerging problems are usually the ones empowered to say in plain language that something does not feel right, and to be heard before they can prove it. High-reliability organisations call this preoccupation with failure. The phrase deserves wider currency in Indian banking.
Beneath all of this sits a question of culture.
Many institutions still reward the appearance of certainty over the practice of inquiry. They reward the tidy quarterly review over the awkward mid-cycle escalation. They treat near-misses, when they reach the board at all, as anomalies rather than as the most important data the system produces.
In a Mythos era, that incentive structure is a slow-acting hazard.
The institutions that will adapt are the ones that build, deliberately, an internal culture where surfacing a faint anomaly is rewarded as competence rather than penalised as alarmism. The capital cost is small. The cultural cost is significant. The strategic value is enormous.
A word, finally, on customers. Indian banking has earned, through a long and uneven journey, the trust of hundreds of millions of first-time formal financial users. That trust is not just a commercial asset. It is a public good, perhaps the most quietly transformative public good of the past decade.
The risks the Mythos conversation surfaces will, in time, manifest most visibly at the retail layer, in fraud patterns subtler than the ones current customer education campaigns address, and in synthetic identity attacks that existing eKYC infrastructure was not designed to detect at scale.
Building customer-facing resilience, including clearer recourse mechanisms, real-time fraud reporting, and a public communication posture that informs without alarming, is not a soft adjacent concern. It is core to the work, and arguably the place where the Indian financial system has the most to lose if the current moment is mishandled.
I want to be careful, in closing, not to overstate the case. Indian banks remain, by any reasonable measure, well run and well supervised.
The RBI’s Master Direction on Cyber Resilience and Digital Payment Security Controls, the cumulative work of CERT-In and ReBIT, the maturity of the larger private and public sector institutions, and the design rigour that has gone into the country’s payments backbone, are real assets.
The supervisory architecture is more sophisticated today than at any point in the country’s history. The Mythos conversation does not undo any of that. What it changes is the horizon against which those assets must now be measured.
ALL ABOUT TACKLING ‘SURPRISE’ RISKS
The honest framing is therefore not that India faces a new threat called Mythos. It is that India, having built the most ambitious digital financial infrastructure of any country its size, is also entering a period in which the systems we depend on are, in modest and accumulating ways, becoming harder to predict.
The danger is not that a particular capability exists somewhere abroad. The danger is that we continue to design, regulate, and reason about a world we no longer fully inhabit.
The work of the next decade, for those who lead the country’s financial institutions, will be quieter than the work of the last one. It will also be harder, because the part of the discipline that is becoming most important is the part that consists of being unsurprised by surprise.
That is a quieter conclusion than the headlines might prefer. It is also, I suspect, the more useful one for the conversation that the Finance Minister has now started.