
Sexual predators are using a new tool to exploit children — AI image generators. Users on a dark-web forum shared 3,000 AI-generated images of child sexual abuse in one month, according to the Internet Watch Foundation.

Current child sexual abuse laws are outdated. They don’t account for the dangers AI and emerging technologies pose. Lawmakers must act fast to put legal protections in place.

The national CyberTipline, a reporting system for suspected online child exploitation, received a staggering 32 million reports in 2022, up from 21 million two years prior. That figure is sure to grow with the rise of AI.

AI platforms are “trained” on existing visual material. The sources used to create images may include real children’s faces taken from social media, or photographs of real-life exploitation. Advanced AI-generated images are virtually indistinguishable from unaltered photographs. Investigators have found new images of old victims, images of “de-aged” celebrities depicted as children in abuse scenarios, and “nudified” images created from otherwise benign photos of clothed children. Much of this technology is downloadable, so offenders can generate images offline without fear of discovery.

Using AI to create pictures of child sexual abuse is not a victimless crime. Behind every AI image are real children. And studies show that a majority of those who possess child sexual abuse material also commit hands-on abuse. Adults can also use platforms like ChatGPT to lure children more effectively. Criminals have long used fake online identities to meet young people in games or on social media, gain their trust, manipulate them into sending explicit images, and then “sextort” them for money, more pictures, or physical acts. ChatGPT makes it shockingly easy to masquerade as a child or teen by mimicking youthful language.


President Biden recently signed an executive order aimed at managing the risks of AI. But we also need help from lawmakers.

We need to update the federal definition of child sexual abuse material to include AI-generated depictions. As the law stands, prosecutors must show harm to an actual child. A defense team could claim that AI material does not depict a real child, even though AI images often pull from material that victimizes real children.

We must adopt policies requiring tech companies to monitor and report exploitative material. Only three companies were responsible for 98% of all CyberTips in 2020 and 2021: Facebook, Google and Snapchat. Many state laws identify “mandatory reporters,” or professionals who are legally required to report suspected abuse. Employees of social media and tech companies ought to have mandated reporting responsibilities.

We need to rethink how we use end-to-end encryption, in which only the sender and receiver can access a message or file. While it has valid applications, end-to-end encryption can help people store and share child abuse images. To illustrate how many abusers go undetected, consider that of the 29 million tips the CyberTipline received in 2021, just 160 came from Apple, which maintains end-to-end encryption for iMessage and iCloud.

Even if law enforcement has a warrant to access a perpetrator’s files, a tech company with end-to-end encryption can claim that it can’t help. Surely an industry built on innovation is capable of developing solutions to protect our children.

AI and social media are evolving every day. If lawmakers act now, we can prevent wide-scale harm to kids.


Teresa Huizar is CEO of the National Children’s Alliance, a network of care centers for child abuse victims based in Washington, D.C.
