North Korean cyber groups are increasingly using artificial intelligence tools to scale and accelerate efforts to place fake remote workers inside global companies, according to a report from Microsoft Threat Intelligence.
The researchers said AI is being used across the attack lifecycle by North Korean groups tracked as Coral Sleet, Sapphire Sleet and Jasper Sleet. These groups use AI to create convincing digital personas, conduct reconnaissance on targets, generate malicious content and maintain access once inside corporate systems.
The campaign builds on a long-running North Korean strategy of placing IT workers at foreign companies to generate revenue and potentially gain access to sensitive systems.
According to the report, generative AI tools allow operatives to quickly build fake professional profiles tailored to specific job markets. For example, Jasper Sleet has been observed using AI to analyze job listings on platforms such as Upwork in order to identify in-demand skills and align fabricated identities with targeted roles.
Researchers said AI also helps attackers produce convincing social engineering materials, including emails and messages that mimic internal communications in multiple languages. AI-generated media and real-time voice modulation are also being used to strengthen impersonation attempts.
In some cases, operatives used the AI tool Faceswap to insert North Korean workers’ faces into stolen identity documents, occasionally reusing the same AI-generated images across multiple personas.
Once a fake worker secures a job, AI tools are used to maintain the deception. Microsoft said operatives prompt AI systems to draft professional communications, answer technical questions or generate code snippets to meet performance expectations in unfamiliar technical environments.
AI is also being used after initial compromise to speed up analysis of the victim’s systems, identify opportunities for lateral movement and blend malicious activity with legitimate network behavior.
The report noted that North Korean actors are increasingly using AI to escalate privileges, locate sensitive records and bypass security controls while minimizing the risk of detection.
While most observed activity currently involves generative AI, Microsoft said threat actors are beginning to experiment with more advanced “agentic AI” systems that could automate elements of cyber operations.
Researchers said such systems could eventually support semi-autonomous workflows capable of refining phishing campaigns, testing infrastructure and identifying new targets. However, Microsoft said it has not yet seen large-scale use of agentic AI by cybercriminal groups due to reliability and operational limitations.
The findings highlight growing concerns among cybersecurity experts that AI tools are lowering the barriers for sophisticated cyber operations while increasing the speed and scale of attacks.