
A UK watchdog has linked deepfake nudes to extortion schemes, warning that predators use AI-generated images to blackmail minors, while a U.S. investigation found 85 “nudify” sites earning up to $36 million annually, often operating with support from major tech infrastructure providers
The rising abuse of artificial intelligence in online exploitation has come under sharp scrutiny after the tragic suicide of a 16-year-old boy in Kentucky, who was reportedly blackmailed with an AI-generated explicit image. The incident has highlighted a growing global threat known as AI-powered sextortion, where cybercriminals use fake sexualised images to extort minors.
According to media reports, the teen, Elijah Heacock, received threatening messages demanding $3,000, warning that a digitally manipulated nude image, believed to have been generated with AI, would otherwise be shared with his contacts. His parents, devastated by the loss, said the blackmail came to light only after his death.
“These people are highly organised and don’t need real photos anymore. They use AI to fabricate convincing images and weaponise them against vulnerable kids,” said John Burnett, Elijah’s father, in an interview with CBS News.
Law enforcement agencies in the U.S. are investigating the case amid a broader surge in similar incidents. The FBI has warned of a “horrific rise” in sextortion cases, mostly targeting boys aged 14 to 17, with several cases ending in suicide.
So-called “nudify” apps—AI tools designed to digitally strip clothing from images—have emerged as the latest tool in the hands of predators. Initially aimed at creating fake images of celebrities, these platforms are increasingly being misused against minors and peers, especially in schools.
AI sextortion fuels lucrative industry
A recent report from the UK-based Internet Watch Foundation linked deepfake nudes to financial extortion, warning that predators no longer need real explicit material to commit abuse. Some online guides found by the watchdog even encouraged perpetrators to use nudify tools to create fake images of children for blackmail.
The commercial side of this abuse is also concerning. An investigation by Indicator, a U.S. publication tracking digital threats, found 85 nudify websites collectively generating up to $36 million annually. Despite regulatory actions, many of these sites remain active, supported by major infrastructure providers like Google, Amazon, and Cloudflare.
International responses are slowly taking shape. Spain and the UK have criminalised the creation and distribution of non-consensual deepfake pornography, while the U.S. recently passed the “Take It Down Act” to tackle the issue. Meta has also filed lawsuits against companies promoting AI nudification tools.
Still, experts warn the problem is far from over. “It’s a constant battle,” Indicator’s report concluded, “and the adversaries are persistent, evolving faster than the protections against them.”