British Tech Companies and Child Protection Officials to Examine AI's Capability to Generate Exploitation Content

Tech firms and child safety organizations will receive authority to evaluate whether artificial intelligence tools can produce child exploitation images under new UK legislation.

Substantial Increase in AI-Generated Illegal Material

The declaration came alongside findings from a child protection monitoring body showing that cases of AI-generated CSAM have more than doubled in the last twelve months, growing from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the changes, the authorities will permit designated AI developers and child protection organizations to inspect AI models – the foundational systems behind conversational AI and image generators – to ensure they have sufficient protective measures in place to prevent them from creating images of child exploitation.

"Ultimately about stopping abuse before it occurs," stated the minister for AI and online safety, noting: "Specialists, under rigorous conditions, can now detect the risk in AI systems promptly."

Addressing Regulatory Obstacles

The changes address a legal obstacle: because it is illegal to create and possess CSAM, AI developers and others could not generate such content as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM was published online before dealing with it.

This legislation aims to avert that issue by making it possible to stop the creation of those images at their origin.

Legislative Structure

The authorities are introducing the changes as amendments to the crime and policing bill, which also implements a prohibition on possessing, producing or distributing AI models designed to generate child sexual abuse material.

Real-World Impact

Recently, the official toured the London headquarters of Childline and listened to a mock-up call to counsellors involving an account of AI-based exploitation. The interaction portrayed a teenager requesting help after facing extortion using a sexualised deepfake of themselves, constructed with AI.

"When I hear about children experiencing blackmail online, it is a cause of extreme anger in me and rightful concern amongst parents," he said.

Alarming Data

A prominent internet monitoring foundation stated that instances of AI-generated exploitation content – such as webpages, each of which may include numerous files – had more than doubled so far this year.

Instances of category A material – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly targeted, making up 94% of illegal AI images in 2025
  • Portrayals of infants and toddlers rose from five in 2024 to 92 in 2025

Sector Reaction

The law change could "constitute a crucial step to ensure AI products are secure before they are released," commented the chief executive of the internet monitoring organization.

"AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the capability to create possibly limitless quantities of advanced, lifelike exploitative content," she added. "Material which additionally commodifies victims' trauma, and renders children, particularly female children, more vulnerable on and off line."

Counseling Session Data

The children's helpline also released details of counselling interactions where AI has been mentioned. AI-related risks mentioned in the conversations include:

  • Using AI to rate weight, physique and looks
  • AI assistants dissuading young people from talking to trusted adults about abuse
  • Facing harassment online with AI-generated material
  • Digital blackmail using AI-faked images

Between April and September this year, the helpline conducted 367 support sessions in which AI, conversational AI and associated topics were discussed, significantly more than in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Amanda Hall