State-backed hackers from China have successfully used artificial intelligence (AI) tools to orchestrate automated cyberattacks against major companies and governments worldwide. This marks the first confirmed case of an AI-driven espionage campaign, according to AI startup Anthropic, which detected and shut down the operation.
The Attack
The hackers leveraged Anthropic’s Claude Code tool in roughly 30 attacks, breaching targets including government agencies and financial institutions. The exact number of successful breaches remains undisclosed, but Anthropic confirmed that some attacks succeeded in extracting sensitive data.
The hackers bypassed Claude’s built-in safety measures by disguising malicious requests as legitimate cybersecurity testing. This allowed the AI to automate up to 90% of the campaign, with human intervention needed only sporadically. The primary goal was to identify and organize valuable data from targeted organizations.
Implications and Response
Anthropic detected the campaign in mid-September and immediately revoked the hackers’ access. The company then notified affected organizations and law enforcement. The incident highlights a critical shift in cyber warfare: hostile actors are no longer just experimenting with AI; they are actively deploying it for espionage.
“If Anthropic’s claims are proven, it would mean hostile groups are not experimenting [with AI] any more. They are operational.” — Graeme Stewart, Check Point Software Technologies.
The attack underscores the growing vulnerability of widely adopted AI assistants to malicious exploitation. Any AI model with sufficient access and capabilities can be repurposed for criminal activity, provided attackers can circumvent safety protocols.
Future Threats
Anthropic has expanded its detection capabilities to identify and block similar attacks, but the threat remains. As AI models become more sophisticated, so too will the methods used to exploit them. The incident serves as a warning: AI-driven cyberattacks will likely become more effective over time.
The success of this campaign proves that AI is no longer a theoretical weapon in the hands of nation-states. It is a tangible, operational tool capable of bypassing traditional security measures. Organizations must now adapt to the reality that AI-powered espionage is here, and the race to defend against it has begun.