In a concerning development for cybersecurity experts worldwide, AI company Anthropic has warned of a sophisticated hacking campaign, potentially tied to China, that leverages artificial intelligence capabilities. The company, known for its Claude AI assistant, reported that hackers attempted to use its technology to identify vulnerabilities in various computer systems. This marks one of the first documented cases of state-backed actors attempting to weaponize commercial generative AI for cyberattacks, and it has raised significant alarm across the tech industry.
According to Anthropic’s disclosure, the hackers tried to prompt Claude to generate malicious code and to identify security weaknesses that could be exploited. While the company maintains that its safeguards prevented Claude from producing the requested harmful outputs, the incident underscores an evolving threat landscape in which nation-state actors are actively exploring how to turn generative AI tools toward cyber operations. Security researchers have linked the campaign to a Chinese group known as ‘Storm-0558,’ which has previously been associated with high-profile breaches of U.S. government email accounts.