UN agencies are warning that artificial intelligence is rapidly amplifying threats to children, with the technology being exploited for online grooming, deepfake creation, cyberbullying and the spread of harmful content. According to Cosmas Zavazava of the International Telecommunication Union, children are increasingly targeted through AI tools that analyse their online behaviour, emotions and interests, enabling predators to tailor abuse and manipulation with alarming precision. Evidence from the COVID-19 pandemic showed that online abuse, particularly of girls and young women, often spilled over into real-world harm.
Child protection organisations report that AI is now being used to create explicit fake images of real children, fuelling new forms of sexual exploitation and extortion. Data from the Childlight Global Child Safety Institute underscores the scale of the problem: technology-facilitated child abuse cases in the United States surged sharply between 2023 and 2024, a sign of how quickly the threat landscape is evolving.
As awareness grows, governments are beginning to respond more forcefully. Australia became the first country to ban social media accounts for children under 16, citing evidence that large numbers of children are exposed to harmful, violent or distressing content and to widespread cyberbullying, much of it on social platforms. Several other countries are now preparing similar laws or restrictions as they grapple with the risks posed by digital environments.
At the global level, a wide range of UN bodies published a Joint Statement on Artificial Intelligence and the Rights of the Child in January 2026, warning that societies are ill-prepared to manage the dangers AI poses to children. The statement points to widespread AI illiteracy among children, parents, teachers and caregivers, alongside gaps in technical expertise among policymakers on AI governance, data protection and child-rights impact assessments.
The statement also places responsibility on technology companies, noting that most AI-enabled tools and systems are not currently designed with children’s safety and well-being in mind. While acknowledging the private sector as an essential partner, UN officials stress the need to flag risks early and ensure responsible AI deployment that protects children without undermining innovation. The agencies maintain regular engagement with companies to reinforce these obligations and encourage safer design practices.
UN agencies emphasize that protecting children in the digital age is fundamentally a children’s rights issue. Building on earlier updates to international child rights law that addressed online risks, they argue that clearer guidance and stronger regulation are now needed as children go online at ever younger ages. The newly issued child online protection guidelines aim to help parents, educators, regulators and industry create safer digital spaces, so that AI development and use serve children’s best interests and protect them from harm.






