
Governments Move to Curb AI Child Exploitation Content with Tough New Legislation

Dated: November 13, 2025

The UK government has introduced world-leading legislation to prevent the creation of AI-generated child sexual abuse material (CSAM), working closely with the AI industry and child protection organisations to ensure safeguards are built into AI models. The new laws will empower the Technology Secretary and Home Secretary to designate AI developers and organisations such as the Internet Watch Foundation (IWF) as authorised testers, enabling them to examine AI systems and prevent misuse. This action comes as the IWF reports that cases of AI-generated CSAM have more than doubled in the past year, increasing from 199 in 2024 to 426 in 2025, with a sharp rise in depictions of infants.

The legislation aims to close critical safety gaps by allowing experts to test AI models for vulnerabilities before they are released, ensuring the models cannot be manipulated into producing indecent images or videos of children. Previously, criminal liability rules made such testing legally risky, meaning abusive images could only be removed after they had been created and circulated online. The new measures, among the first of their kind globally, will also allow authorised organisations to check models for protections against extreme pornography and non-consensual intimate content.

UK officials emphasised that while generating or possessing CSAM—real or AI-generated—is already illegal, advances in AI pose new threats that require stronger preventive measures. Technology Secretary Liz Kendall stressed that technological progress must not outpace child safety, and the new legislation ensures that safeguards are integrated into AI systems from the outset. Minister for Safeguarding Jess Phillips added that this proactive approach will stop legitimate AI tools from being exploited to create abusive content and better protect children from online predators.

The IWF’s latest data shows the severity of AI-generated CSAM is escalating, with Category A content—depicting the most extreme forms of abuse—rising from 2,621 to 3,086 items, now representing 56% of all illegal material compared to 41% the previous year. Girls are overwhelmingly targeted, accounting for 94% of illegal AI images in 2025, and cases involving infants aged 0–2 have surged dramatically.

To support the safe implementation of these measures, the government will form an expert group in AI and child safety to design secure testing safeguards, protect sensitive data, and support the wellbeing of researchers. Introduced as an amendment to the Crime and Policing Bill, this initiative marks a major step toward making the UK the safest country for children online. It reflects the government’s commitment to collaborating with AI developers, tech platforms, and child protection organisations to ensure AI innovation goes hand in hand with public trust and child safety.

Internet Watch Foundation Chief Executive Kerry Smith welcomed the move, calling it a vital step toward ensuring AI products are “safe by design.” She highlighted that AI tools have made it easier for criminals to produce limitless, realistic abuse material, re-victimising survivors and putting children—especially girls—at greater risk. The new law, she said, is essential to ensuring that child safety is built into AI technology before it reaches the public.
