
Governments Move to Curb AI Child Exploitation Content with Tough New Legislation

Dated: November 13, 2025

The UK government has introduced world-leading legislation to prevent the creation of AI-generated child sexual abuse material (CSAM), working closely with the AI industry and child protection organisations to ensure safeguards are built into AI models. The new laws will empower the Technology Secretary and Home Secretary to designate AI developers and organisations such as the Internet Watch Foundation (IWF) as authorised testers, enabling them to examine AI systems and prevent misuse. This action comes as the IWF reports that cases of AI-generated CSAM have more than doubled in the past year, increasing from 199 in 2024 to 426 in 2025, with a sharp rise in depictions of infants.

The legislation aims to close critical safety gaps by allowing experts to test AI models for vulnerabilities before they are released, ensuring they cannot be manipulated to produce indecent images or videos of children. Previously, the risk of criminal liability for creating such images made this kind of testing effectively impossible, so abusive material could only be removed after it had been generated and circulated online. The new measures, among the first of their kind globally, will also allow authorised organisations to check that models have protections against producing extreme pornography and non-consensual intimate images.

UK officials emphasised that while generating or possessing CSAM—real or AI-generated—is already illegal, advances in AI pose new threats that require stronger preventive measures. Technology Secretary Liz Kendall stressed that technological progress must not outpace child safety, and the new legislation ensures that safeguards are integrated into AI systems from the outset. Minister for Safeguarding Jess Phillips added that this proactive approach will stop legitimate AI tools from being exploited to create abusive content and better protect children from online predators.

The IWF’s latest data shows the severity of AI-generated CSAM is escalating, with Category A content—depicting the most extreme forms of abuse—rising from 2,621 to 3,086 items, now representing 56% of all illegal material compared to 41% the previous year. Girls are overwhelmingly targeted, accounting for 94% of illegal AI images in 2025, and cases involving infants aged 0–2 have surged dramatically.

To support the safe implementation of these measures, the government will form an expert group in AI and child safety to design secure testing safeguards, protect sensitive data, and support the wellbeing of researchers. Introduced as an amendment to the Crime and Policing Bill, this initiative marks a major step toward making the UK the safest country for children online. It reflects the government’s commitment to collaborating with AI developers, tech platforms, and child protection organisations to ensure AI innovation goes hand in hand with public trust and child safety.

Internet Watch Foundation Chief Executive Kerry Smith welcomed the move, calling it a vital step toward ensuring AI products are “safe by design.” She highlighted that AI tools have made it easier for criminals to produce limitless, realistic abuse material, re-victimising survivors and putting children—especially girls—at greater risk. The new law, she said, is essential to ensuring that child safety is built into AI technology before it reaches the public.

Related Posts

  • AI for Child Safety on Digital Platforms
  • Leveraging AI to Prevent Child Labor Globally
  • AI-Powered Child Welfare Monitoring Systems
  • The Role of AI in Combating Child Labor Globally
  • AI in Protecting and Promoting Child Rights Across the Globe


