UK Technology Firms and Child Safety Agencies to Test AI's Capability to Generate Exploitation Images

Tech firms and child protection organizations will be granted permission to evaluate whether AI systems can produce child abuse material under recently introduced UK legislation.

Significant Increase in AI-Generated Harmful Material

The announcement coincided with findings from a protection monitoring body showing that cases of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the amendments, the authorities will permit approved AI companies and child safety groups to inspect AI models – the underlying systems behind chatbots and image generators – and verify that they have adequate safeguards preventing them from producing images of child sexual abuse.

Kanishka Narayan, the minister for AI and online safety, said the measures were "ultimately about stopping abuse before it happens," adding: "Specialists, under rigorous protocols, can now identify the risk in AI systems promptly."

Addressing Regulatory Obstacles

The changes have been introduced because creating and possessing CSAM is illegal, meaning that AI developers and other parties could not generate such images as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.

The legislation aims to prevent that problem by making it possible to stop the creation of such images at the source.

Legislative Framework

The changes are being introduced by the authorities as amendments to criminal justice legislation, which also implements a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material.

Practical Consequences

This week, the minister visited the London base of Childline and listened to a mock-up of a call to counsellors featuring an account of AI-based abuse. The call depicted an adolescent seeking help after being blackmailed with an explicit deepfake of himself created using AI.

"When I learn about children facing blackmail online, it is a cause of extreme frustration for me and of rightful anger amongst parents," he stated.

Concerning Statistics

A leading online safety foundation said that cases of AI-generated exploitation content – where a single case, such as a webpage, may contain numerous images – had more than doubled so far this year.

Cases of the most severe content – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.

  • Girls were predominantly victimized, making up 94% of illegal AI depictions in 2025
  • Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025

Sector Reaction

The legislative amendment could "represent a vital step to guarantee AI products are secure before they are released," commented the chief executive of the online safety organization.

"AI tools have made it possible for victims to be targeted all over again with just a few simple actions, giving offenders the capability to create potentially endless quantities of sophisticated, photorealistic exploitative content," she continued. "Content which further exploits victims' trauma and makes young people, particularly girls, less safe both on and offline."

Counseling Session Information

The children's helpline also released details of counselling sessions in which AI was mentioned. AI-related harms discussed in the conversations include:

  • Using AI to assess body size and appearance
  • Chatbots dissuading children from talking to trusted guardians about abuse
  • Being bullied online with AI-generated material
  • Online extortion using AI-faked images

Between April and September this year, Childline delivered 367 support sessions in which AI, chatbots and related topics were mentioned – four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Jason Baker

A passionate coffee roaster and writer with over a decade of experience in specialty coffee and sustainable sourcing practices.