UK Technology Companies and Child Safety Agencies to Test AI's Ability to Create Abuse Images
Technology companies and child safety agencies will receive permission to evaluate whether artificial intelligence tools can produce child abuse material under recently introduced UK laws.
Significant Rise in AI-Generated Harmful Content
The announcement came as a protection watchdog revealed that cases of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the changes, the authorities will permit designated AI companies and child safety groups to inspect AI models – the underlying systems behind conversational and visual AI tools – and verify that they have adequate safeguards to prevent them from creating images of child exploitation.
Kanishka Narayan said the measures were "ultimately about stopping exploitation before it occurs", adding: "Experts, under strict protocols, can now detect the danger in AI models early."
Addressing Legal Obstacles
The changes have been implemented because it is against the law to create and possess CSAM, meaning that AI developers and others cannot create such images as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM was published online before dealing with it.
The law is designed to avert that problem by making it possible to stop the creation of such material at source.
Legal Structure
The government is introducing the changes as amendments to the crime and policing bill, which also establishes a prohibition on owning, producing or sharing AI models developed to create child sexual abuse material.
Practical Impact
The minister recently visited the London headquarters of a children's helpline, where he heard a mock-up call to counsellors featuring a report of AI-based exploitation. The scenario portrayed an adolescent seeking help after being blackmailed with an explicit deepfake of himself, created using AI.
"When I hear about young people facing blackmail online, it is a source of intense anger in me and justified concern amongst parents," he stated.
Concerning Data
A prominent online safety foundation reported that cases of AI-generated abuse material – such as online pages that may include multiple files – had more than doubled so far this year.
Cases involving the most severe category of material – the most serious form of abuse – rose from 2,621 visual files to 3,086.
- Girls were predominantly targeted, accounting for 94% of illegal AI depictions in 2025
- Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "represent a vital step to guarantee AI products are secure before they are launched," stated the chief executive of the internet monitoring foundation.
"Artificial intelligence systems have made it so survivors can be victimised all over again with just a few clicks, giving offenders the capability to make potentially endless quantities of sophisticated, lifelike child sexual abuse material," she continued. "Material which additionally exploits victims' suffering, and renders young people, particularly girls, less safe online and offline."
Counseling Session Data
Childline also released details of counselling sessions in which AI was mentioned. The AI-related harms discussed in those sessions included:
- Using AI to rate weight, body and appearance
- Chatbots discouraging young people from talking to trusted guardians about abuse
- Facing harassment online with AI-generated material
- Digital extortion using AI-manipulated pictures
Between April and September this year, the helpline conducted 367 counselling sessions where AI, chatbots and related terms were discussed, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including young people turning to AI chatbots for support and using AI therapy apps.