UK Technology Firms and Child Protection Officials to Test AI's Ability to Create Abuse Content
Under recently introduced UK laws, technology companies and child safety organizations will be permitted to test whether AI tools can produce child exploitation material.
Substantial Rise in AI-Generated Harmful Content
The announcement came as a safety monitoring body published findings showing that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the amendments, the government will allow designated AI companies and child protection organizations to examine AI models – the underlying systems for chatbots and visual AI tools – and verify they have adequate protective measures to prevent them from creating images of child sexual abuse.
"Fundamentally about preventing exploitation before it happens," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now identify the danger in AI systems promptly."
Addressing Legal Challenges
The amendments were introduced because producing and possessing CSAM is illegal, which meant that AI developers and others could not generate such content even as part of a testing process. Previously, officials had to wait until AI-generated CSAM had been uploaded online before they could act against it.
The new law is designed to prevent that problem by enabling experts to halt the creation of such images at source.
Legislative Framework
The government is adding the amendments to the crime and policing bill, which also establishes a prohibition on owning, creating or distributing AI systems designed to produce exploitative content.
Real-World Consequences
Recently, the official visited the London headquarters of Childline and listened to a mock-up of a call to counsellors featuring an account of AI-based exploitation. The call portrayed an adolescent seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I hear about young people facing extortion online, it is a cause of intense frustration in me and justified concern amongst families," he stated.
Concerning Statistics
A prominent internet monitoring foundation stated that instances of AI-generated exploitation content – each typically a web page that may contain numerous images or videos – had more than doubled so far this year.
Instances of the most severe category of content – the gravest form of abuse – increased from 2,621 images or videos to 3,086.
- Girls were predominantly targeted, accounting for 94% of prohibited AI images in 2025
- Depictions of infants and toddlers rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "represent a crucial step to guarantee AI tools are safe before they are launched," commented the head of the internet monitoring foundation.
"Artificial intelligence systems have made it so survivors can be victimised all over again with just a simple actions, giving criminals the ability to create possibly limitless quantities of advanced, photorealistic child sexual abuse material," she added. "Material which further commodifies victims' suffering, and makes young people, particularly female children, more vulnerable on and off line."
Counselling Interaction Data
Childline also published data from support sessions in which AI was mentioned. AI-related harms raised in these sessions include:
- Using AI to rate body size and appearance
- Chatbots discouraging young people from talking to safe adults about abuse
- Facing harassment online with AI-generated material
- Digital extortion using AI-faked images
Between April and September this year, Childline delivered 367 counselling interactions in which AI, conversational AI and related topics were discussed, significantly more than in the same period last year.
Half of the references to AI in the 2025 interactions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy applications.