Anthropic Warns Hackers Are Weaponizing AI for Cyberattacks
AI Misused in Large-Scale Cybercrime
U.S.-based artificial intelligence company Anthropic has revealed that hackers have been misusing its AI chatbot, Claude, to launch advanced cyberattacks. According to the firm, threat actors have exploited its technology to steal massive amounts of personal data, extort victims, and infiltrate organizations.
In one case, Anthropic confirmed its AI was used to generate malicious code; in another, North Korean operatives allegedly used Claude to secure fraudulent remote jobs at leading U.S. tech firms.
How Hackers Exploited Claude
The company reported that cybercriminals deployed Claude in what Anthropic called "vibe hacking," an unprecedented campaign in which AI-generated code was used to target at least 17 organizations, including government agencies.
Hackers not only leveraged the chatbot to write malware but also used it strategically, relying on it to decide which data to steal, craft extortion demands, and even suggest ransom amounts.
The Growing Threat of AI in Cybercrime
Experts warn that AI is accelerating the pace of cyber intrusions. Alina Timofeeva, an adviser on AI and cybercrime, said organizations must shift from reactive defenses to proactive, preventative security strategies.
Anthropic said it has since disrupted the malicious actors, reported the incidents to authorities, and strengthened its monitoring tools to prevent future misuse.
North Korean Scammers Using AI for Job Fraud
Beyond cyberattacks, Anthropic uncovered that North Korean operatives used Claude to create fake resumes and job applications for remote positions at U.S. Fortune 500 companies. Once hired, the operatives allegedly relied on the AI to translate communications, write code, and bridge cultural and technical gaps.
Cybersecurity experts warn this tactic could put global corporations at risk of unknowingly violating international sanctions by employing North Korean nationals.
AI Is Not Creating New Crimes—Yet
Despite these revelations, analysts say AI is not inventing brand-new types of crime. Instead, it is making existing cyber threats like phishing scams and ransomware attacks faster, smarter, and more scalable.
“AI is essentially a repository of sensitive information,” said Nivedita Murthy, senior security consultant at Black Duck. “Companies need to treat it with the same protection measures as any other confidential data system.”