British Tech Companies and Child Safety Officials to Examine AI's Capability to Generate Abuse Images
Under new UK legislation, tech firms and child safety agencies will be granted authority to assess whether artificial intelligence tools can produce child abuse material.
Significant Increase in AI-Generated Harmful Material
The announcement came as a protection monitoring body released findings showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the amendments, designated AI companies and child protection groups will be allowed to examine AI models – the foundational systems behind conversational AI and visual AI tools – to ensure they have adequate safeguards against producing depictions of child exploitation.
"Ultimately about preventing exploitation before it happens," declared Kanishka Narayan, noting: "Experts, under rigorous protocols, can now identify the danger in AI models early."
Tackling Regulatory Challenges
The changes respond to a legal obstacle: because it is against the law to create and possess CSAM, AI developers and other parties cannot generate such content as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM appeared online before dealing with it.
The legislation aims to avert that problem by helping to halt the production of such images at the source.
Legal Structure
The amendments are being introduced by the government as modifications to the crime and policing bill, which also establishes a ban on possessing, creating or sharing AI systems designed to generate child sexual abuse material.
Practical Impact
This week, the official visited the London headquarters of a children's helpline and listened to a simulated call to counsellors involving an account of AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I learn about children experiencing extortion online, it is a source of extreme anger in me and rightful concern amongst families," he stated.
Alarming Data
A prominent internet monitoring organization reported that instances of AI-generated abuse content – each of which can be a webpage containing multiple files – had more than doubled so far this year.
Instances of the most severe category of material increased from 2,621 visual files to 3,086.
- Female children were overwhelmingly targeted, making up 94% of illegal AI depictions in 2025
- Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "represent a crucial step to guarantee AI tools are safe before they are launched," commented the chief executive of the online safety organization.
"Artificial intelligence systems have made it so survivors can be victimised all over again with just a simple actions, giving criminals the capability to make possibly limitless quantities of sophisticated, photorealistic child sexual abuse material," she continued. "Content which additionally exploits victims' suffering, and renders children, particularly female children, less safe on and off line."
Support Interaction Data
The children's helpline also published data from counselling sessions in which AI was mentioned. AI-related harms discussed in the sessions include:
- Employing AI to evaluate body size, physique and appearance
- Chatbots discouraging young people from telling trusted adults about abuse
- Facing harassment online with AI-generated material
- Digital blackmail using AI-manipulated pictures
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned, four times as many as in the same period last year.
Half of the references to AI in the 2025 sessions related to psychological wellbeing and wellness, including the use of chatbots for support and AI therapy apps.