AI startup Anthropic has reported that hackers allegedly backed by the Chinese government used its artificial intelligence tools to carry out automated cyberattacks on major companies and government agencies worldwide. The US-based company described the operation as the first documented cyberespionage campaign conducted largely without human involvement.
Anthropic said it is “highly confident” that the attacks, which numbered around 30, were orchestrated by a state-sponsored group in China. The hackers reportedly exploited Anthropic’s Claude Code AI tool to breach targets in the technology, finance, and government sectors, succeeding in a small number of cases. The company did not disclose the specific organizations affected.
The attackers manipulated Claude Code to extract sensitive information and organize it to identify valuable data. Although the AI tool is designed to prevent harmful activity, the hackers bypassed its safeguards by framing their requests as legitimate cybersecurity testing tasks. Anthropic said human intervention was minimal and only sporadic, with AI conducting 80 to 90 percent of the campaign.
Graeme Stewart, head of public sector at cybersecurity firm Check Point Software Technologies, said that if confirmed, the attacks indicate that “hostile groups are not experimenting [with AI] any more. They are operational.” Stewart added that widely adopted AI systems could be exploited for criminal operations.
Anthropic detected the attacks in mid-September and launched an immediate investigation. Within 10 days, the company had cut off the hackers’ access to Claude and informed affected organizations and law enforcement authorities. The startup also expanded its monitoring capabilities to detect potentially malicious use of its tools.
The company warned that attacks using AI could grow more sophisticated over time. It is developing additional methods to investigate and identify large-scale, distributed attacks like the one reported.
Anthropic’s disclosure raises broader concerns about AI security. The incident suggests that cybercriminals and state actors may increasingly leverage AI to automate complex operations that previously required extensive human coordination. Security experts say vigilance is needed as the technology becomes more widely adopted across industries and sectors.
The case highlights the emerging intersection of artificial intelligence and cybersecurity, as advanced AI tools designed to support legitimate business and research activities are exploited for espionage and criminal purposes. Experts are urging companies and governments to strengthen safeguards and monitoring systems to prevent similar attacks in the future.
The incident marks a significant milestone in the evolution of cyber threats, demonstrating that AI is no longer just a tool for productivity and innovation but also a potential vector for highly automated, state-backed cyberattacks.