
Artificial Intelligence Helps Produce Child Sexual Abuse Material

Predators use AI software to generate CSAM, or 'child pornography,' of fictitious children, or to superimpose images of real children onto existing sexually explicit material.

September 1, 2024


Fight the New Drug collaborates with a variety of qualified organizations and individuals with varying personal beliefs, affiliations, and political persuasions. As FTND is a non-religious and non-legislative organization, the personal beliefs, affiliations, and persuasions of any of our team members or of those we collaborate with do not reflect or impact the mission of Fight the New Drug.

Disclaimer: Fight the New Drug is a non-religious and non-legislative awareness and education organization. Some of the issues discussed in the following article are legislatively-affiliated. Though our organization is non-legislative, we fully support the regulation of already illegal forms of pornography and sexual exploitation, including the fight against sex trafficking.

The contents of this article were originally published by NCOSE. Content from this article has been reshared with permission. 

Written by Victoria Amaya Rodriguez

From the convenience of voice-to-text converters and digital assistants like Siri to the unsettling realm of ‘deepfake’ technology and image generation, artificial intelligence has seamlessly merged into our daily lives, boosting our efficiency while also advancing forms of child sexual exploitation.

Before the widespread accessibility of artificial intelligence (AI), one method of producing child sexual abuse material (CSAM, the more apt term for ‘child pornography’) involved cutting out pictures of children and pasting them onto pornographic images to composite collages of CSAM. Today, predators can download readily available text-to-image AI software to generate CSAM of fictitious children, as well as AI-manipulated CSAM, in which real pictures of any child are digitally superimposed onto existing CSAM or other sexually explicit material.

With emerging technology and an ever-growing set of tools to enhance or edit images, AI-generated images have reached a level of sophistication that makes them virtually indistinguishable from genuine photographs. Given the realistic nature of AI-driven CSAM, efforts to identify and defend real victims of child sexual abuse are hindered by law enforcement’s difficulty in determining whether exploitative material is real or AI-generated.

Related: Would AI-Generated Nudes Solve the Ethical Problems of Porn Sites?

As reported by the Internet Watch Foundation (IWF), AI CSAM is now realistic enough to be treated as real CSAM, marking AI technology as a new route for malicious actors to commercialize and sexually exploit children. The IWF report also found that AI CSAM revictimizes existing victims: because predators habitually ‘collect’ material of their preferred victims, they can use ‘deepfake’ technology to train AI models on images of a chosen victim and reproduce explicit content in any portrayal they’d like. The same applies to famous children and to youths known personally by the predator: if a photograph of them is available, any child is susceptible to victimization through AI CSAM.

It’s happening right now

Last year, the National Center for Missing & Exploited Children (NCMEC)’s CyberTipline, a system for reporting the web-based sexual exploitation of children, received 4,700 reports of AI-generated CSAM, underscoring the immediate and prevalent threats to child safety posed by generative AI. It’s happening right now. 

In Florida, a science teacher faced ‘child pornography’ charges after admitting to using yearbook photos of students from his school to produce CSAM. A few days later, another Florida man was arrested for ‘child pornography’ after photographing an underage girl in his neighborhood to synthetically create AI CSAM; a detective commented, “What he does is he takes the face of a child and then he sexualizes that, removes the clothing and poses the child and engages them in certain sexual activity. And that’s the images that he is making with this A.I.” The same is happening in South Korea, where a 40-year-old man was sentenced to over two years for producing 360 sexually exploitative images with a text-to-image AI program, using prompts such as “10 years old,” “nude,” and “child.”

Related: Deepfake AI: A Newer, Scarier Genre of “Smart” Online Porn

These AI applications have resulted in the generation and dissemination of synthetic sexually explicit material (SSEM), including instances where students generate explicit content involving their underage classmates. In Illinois, a photo of a 15-year-old with her friends before a school dance was digitally manipulated into sexually explicit images and shared among her classmates. In another instance, male students at a New Jersey high school compiled images from peers’ social media accounts to non-consensually produce and spread explicit photos of more than 30 of their underage female classmates. In Egypt, a 17-year-old girl died by suicide after a boy threatened to distribute, and eventually shared, explicit digitally altered images of her; she suffered severe emotional distress upon their dissemination, endured people’s vile comments, and worried that her family believed the images were authentic.


Children victimized through lawless ‘deepfake’ technology describe experiencing extreme violation, anxiety, and depression; their sense of safety, autonomy, and self-worth is profoundly undermined. Why are corporations still allowed to commercialize and profit from their exploitation?

Despite its exploitative nature, deepfake pornography has gained immense popularity. The 2023 State of Deepfakes Report identified 95,820 deepfake videos online, 98% of which were pornographic. For instance, DeepNude, one of many exploitative applications hosted on Microsoft’s GitHub and advertised with the promise to “See anyone naked,” received 545,162 visits and nearly 100,000 active users before selling for $30,000.

Related: AI Accelerates Rise of Deepfakes on Mainstream Porn Sites

As one of the corporations named to the National Center on Sexual Exploitation’s 2024 Dirty Dozen List, Microsoft’s GitHub is a leading perpetrator in the commercialization and exacerbation of synthetic sexually explicit material, hosting the nudifying technology that allows perpetrators to generate realistic synthetic CSAM.

Artificial Intelligence Technology is Trained on Pre-existing CSAM

In contrast to AI-manipulated CSAM, AI-generated CSAM features fictitious children, seemingly avoiding the exploitation of real children in the production of sexually exploitative material. However, recent revelations found that LAION-5B, a popular large-scale dataset of image-text pairs used to train Stable Diffusion, inadvertently includes CSAM. Namely, the Stanford Internet Observatory investigated the degree of CSAM within the dataset and found 3,226 entries of suspected CSAM. Simply put, the thousands of instances of illegal and abusive material in the open-source dataset suggest Stable Diffusion was once trained on CSAM.

Other issues include ineffective safeguards within several text-to-image generative models: developers’ safety measures, intended to prevent the generation of harmful content, can be easily bypassed through user fine-tuning.

The vulnerabilities in generative AI and the lack of data governance in LAION-5B necessitate exhaustive developer oversight of artificial intelligence development, in addition to much-needed federal legislation to protect victims of non-consensual deepfake pornography.
