Anthropic’s “Mythos” AI Model Sparks Security Alarm as Governments and Banks Open Talks

Web Reporter

Anthropic has warned that its newest artificial intelligence system, known as Mythos, is too powerful for public release, citing what it describes as unprecedented cybersecurity risks. The announcement has triggered urgent discussions with governments and financial regulators in the United States and the United Kingdom, as concerns grow over how advanced AI systems could be misused.

The company confirmed it is engaging directly with US authorities about Mythos, with executives stressing that national security considerations are central to its approach. An Anthropic co-founder said the firm is actively coordinating with officials, noting that future models will be subject to similar scrutiny. The remarks were made during a technology and policy event in Washington, where he acknowledged ongoing tensions following a recent US government designation labelling Anthropic a supply-chain risk.

That designation came after negotiations broke down over limits on how the US Department of Defense could use the company’s AI tools. Despite the dispute, Anthropic said it remains committed to working with national security agencies.

Financial institutions have also become part of the conversation. US Treasury Secretary Scott Bessent recently convened senior banking executives in Washington to discuss the implications of Mythos. According to reports, banks were encouraged to explore how the system might help identify cybersecurity vulnerabilities across financial networks.

Anthropic has said access to the new model will be tightly restricted. Rather than a broad public release, Mythos will be shared only with selected technology and cybersecurity partners, including Amazon, Apple, and JPMorgan Chase. Major banks such as Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are also understood to be testing the system under controlled conditions.

The UK government’s AI Security Institute has issued its own assessment, describing Mythos as a significant step up in capability compared with previous models, particularly in its potential to be used for cyber-related threats. British financial regulators are also reported to be examining the risks associated with the system.

Industry experts say the situation reflects a broader shift in how frontier AI is being handled, with companies increasingly treating their most advanced models as sensitive technologies requiring oversight similar to that applied to critical infrastructure or defence systems.

The debate surrounding Mythos highlights growing pressure on AI developers to balance innovation with security concerns as governments and financial systems prepare for more powerful and potentially unpredictable artificial intelligence tools.
