Generative artificial intelligence is reshaping the dark underworld of child sexual abuse material (CSAM). Recent investigations reveal a steep climb in AI‑crafted images and videos, overwhelming platforms, regulators and child‑safety groups that are already stretched thin.

Reuters reported that actionable reports of AI‑generated CSAM more than doubled over the past two years. The Internet Watch Foundation (IWF) confirmed it identified 8,029 AI‑created images and videos of child sexual abuse in 2025 alone. Those numbers translate into a torrent of content that is harder to detect, verify and remove.

In the United States, the National Center for Missing & Exploited Children (NCMEC) received a staggering 1.5 million AI‑linked CSAM reports in 2025. That figure dwarfs the 67,000 reports logged a year earlier and the 4,700 reported in 2023. The surge reflects not only more content but also the growing sophistication of AI tools that can produce realistic, often indistinguishable, depictions of abuse.

Law‑enforcement officers now face an added layer of complexity. Determining whether a child depicted in an image is a real victim, a digitally altered subject or a wholly fabricated figure consumes valuable time. Each misstep delays action that could protect a child in immediate danger.

A high‑profile Minnesota case underscores the new threat. William Michael Haslach, a school lunch monitor and traffic guard, allegedly used AI applications to strip clothing from photos he had taken of children at work. Federal investigators uncovered more than 90 victims and nearly 800 AI‑generated abuse images on his devices. The case shows how ordinary photographs harvested from social media can become the raw material for illicit AI manipulation.

Beyond static images, offenders are experimenting with other formats. Reports cite manipulated photos of real children, as well as chatbot conversations where perpetrators seek grooming advice or role‑play abusive scenarios. These textual and visual hybrids further confound automated moderation systems, which now have to sift through a flood of false positives and low‑quality tips.

Automated moderation tools, once considered a frontline defense, are being swamped. The sheer volume of AI‑generated content produces “junk tips” that overload task forces already coping with limited resources. Every erroneous flag or missed piece of evidence represents a lost opportunity to intervene in an ongoing abuse situation.

Experts warn that the problem will only intensify as AI models become more accessible and user‑friendly. Without decisive policy action, technical safeguards and coordinated law‑enforcement responses, the internet may see an even larger influx of synthetic abuse material, further eroding the protective net for children.

This article was written with the assistance of AI.