
Understanding and responding to AI-generated child sexual abuse material

The rapid evolution of artificial intelligence (AI) brings numerous benefits, but it also poses significant risks, particularly the use of AI to create and share child sexual abuse material, referred to as AI-generated CSAM or AI-CSAM.

Our comprehensive guide, developed in collaboration with the Internet Watch Foundation (IWF), provides essential information for all professionals working with children and young people.

Why is this guide crucial for professionals?

  • Stay Informed: Learn about the various ways AI can be misused to create child sexual abuse imagery, including the manipulation of existing images or the generation of entirely new content.
  • Understand the Law: Gain clarity on the legal implications of AI-generated child sexual abuse material, which is illegal even if it is not photorealistic.
  • Effective Response: Access clear guidance on how to respond to incidents involving AI-CSAM, treating them with the same urgency and care as any other child sexual abuse issue. This includes reporting procedures and crucial "dos and don'ts" when encountering such material.
  • Support for Victims: Learn about providing wellbeing support to victims, who may experience significant emotional and psychological impact.

 

Social media pack

To support the launch of this vital new resource on AI-generated child sexual abuse material, we've created a social media pack featuring suggested messaging and supporting visuals for use online and in newsletters, helping you raise awareness and share the guidance with your networks.

DOWNLOAD SOCIAL MEDIA PACK