The UK government has introduced world-leading legislation to prevent the creation of AI-generated child sexual abuse material (CSAM), working closely with the AI industry and child protection organisations to ensure safeguards are built into AI models. The new laws will empower the Technology Secretary and Home Secretary to designate AI developers and organisations such as the Internet Watch Foundation (IWF) as authorised testers, enabling them to scrutinise AI models and confirm they cannot be misused to generate abusive imagery. This action comes as the IWF reports that instances of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025, with a sharp increase in depictions of infants.
The legislation aims to close a critical safety gap by allowing experts to test AI models for vulnerabilities before they are released, ensuring they cannot be manipulated to produce indecent images or videos of children. Until now, criminal liability laws made such testing effectively impossible, since generating the material, even for safety research, is itself an offence; abusive images could therefore only be removed after they had been created and circulated online. The new measures, among the first of their kind globally, will also allow authorised organisations to verify that models have protections against extreme pornography and non-consensual intimate content.
UK officials emphasised that while generating or possessing CSAM—real or AI-generated—is already illegal, advances in AI pose new threats that require stronger preventive measures. Technology Secretary Liz Kendall stressed that technological progress must not outpace child safety, and the new legislation ensures that safeguards are integrated into AI systems from the outset. Minister for Safeguarding Jess Phillips added that this proactive approach will stop legitimate AI tools from being exploited to create abusive content and better protect children from online predators.
The IWF’s latest data shows the severity of AI-generated CSAM is escalating. Category A content, which depicts the most extreme forms of abuse, rose from 2,621 items in 2024 to 3,086 in 2025 and now accounts for 56% of all illegal AI material, up from 41% the previous year. Girls are overwhelmingly targeted, appearing in 94% of illegal AI images in 2025, and cases involving infants aged 0–2 have surged dramatically.
To support the safe implementation of these measures, the government will convene an expert group on AI and child safety to design secure testing safeguards, protect sensitive data, and safeguard the wellbeing of researchers. Introduced as an amendment to the Crime and Policing Bill, the initiative marks a major step toward making the UK the safest country in the world for children online. It reflects the government’s commitment to working with AI developers, tech platforms, and child protection organisations so that AI innovation goes hand in hand with public trust and child safety.
Internet Watch Foundation Chief Executive Kerry Smith welcomed the move, calling it a vital step toward ensuring AI products are “safe by design.” She highlighted that AI tools have made it easier for criminals to produce limitless, realistic abuse material, re-victimising survivors and putting children—especially girls—at greater risk. The new law, she said, is essential to ensuring that child safety is built into AI technology before it reaches the public.