UK Tech Firms and Child Safety Officials to Test AI's Capability to Create Exploitation Content

Tech firms and child safety agencies will be granted permission to evaluate whether AI tools can produce child abuse images under new British laws.

Significant Increase in AI-Generated Illegal Material

The declaration coincided with findings from a safety watchdog showing that cases of AI-generated CSAM have increased dramatically in the past year, growing from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, the authorities will permit designated AI developers and child safety organizations to examine AI models – the foundational systems for conversational AI and visual AI tools – and ensure they have adequate safeguards to stop them from creating depictions of child exploitation.

"This is ultimately about stopping exploitation before it occurs," declared the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now detect the danger in AI models early."

Addressing Legal Challenges

The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and other parties cannot generate such images as part of an evaluation process. Previously, authorities could not act until AI-generated CSAM had already been published online.

This legislation is designed to prevent that issue by helping to stop the production of such material at source.

Legislative Structure

The amendments are being introduced by the government as revisions to the criminal justice legislation, which is also implementing a prohibition on possessing, producing or distributing AI models developed to create child sexual abuse material.

Real-World Consequences

Recently, the minister toured the London headquarters of Childline and heard a simulated call to advisors involving a report of AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed using a sexualised AI-generated image of themselves.

"When I hear about young people facing extortion online, it provokes intense frustration in me and justified concern amongst parents," he stated.

Alarming Statistics

A leading internet monitoring organization stated that cases of AI-generated exploitation material – each case potentially an online page containing numerous files – had more than doubled so far this year.

Cases of the most severe material – the most serious form of exploitation – rose from 2,621 visual files to 3,086.

  • Female children were predominantly targeted, making up 94% of prohibited AI depictions in 2025
  • Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025

Sector Response

The law change could "constitute a crucial step to ensure AI products are secure before they are launched," stated the head of the internet monitoring foundation.

"Artificial intelligence systems mean victims can be victimised all over again with just a few simple actions, giving offenders the capability to create potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she added. "Content which additionally exploits victims' suffering, and renders young people, especially girls, more vulnerable both online and offline."

Counseling Interaction Information

The children's helpline also released details of counselling sessions where AI was mentioned. AI-related harms discussed in the conversations include:

  • Using AI to evaluate weight, body and looks
  • Chatbots dissuading children from consulting safe guardians about abuse
  • Being bullied online with AI-generated content
  • Digital extortion using AI-faked images

Between April and September this year, the helpline conducted 367 support interactions where AI, conversational AI and associated terms were mentioned, four times as many as in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.

Julia Miller