Hiba* (beige shirt), 23, and Rama*, 19, taking a selfie together in Zaatari Refugee Camp, Jordan.

When AI Enables Abuse, Children Pay the Price - the time for action is now

9 Feb 2026 Global

Blog by Jeffrey DeMarco

Senior Technical Advisor, Protecting Children from Digital Harm, at Save the Children

Safer Internet Day this year focuses on ‘Smart tech, safe choices’ – exploring the safe and responsible use of AI. The timing could not be better. In recent weeks, we have once again seen what happens when powerful AI tools are released at scale with weak safeguards. People use them to sexualise and humiliate others, and children end up dealing with the consequences.

This is not harmless internet humour. It is image-based sexual abuse: real photos manipulated into sexualised content without consent, sometimes involving children. UK regulators have opened a formal investigation into whether one major platform met its legal duties after concerns about AI-generated sexualised imagery.

A synthetic image can still be deeply violating. The impacts we hear about are immediate and real. They include shame, fear, bullying, isolation and a sudden loss of control over one’s body and identity. These images can also be used for coercion and sextortion, and they spread at a speed that outpaces any family’s ability to contain them.

The scale should concern anyone who works with or cares for children, including parents, social workers, teachers, medical professionals and police. Reports to the CyberTipline run by the National Center for Missing & Exploited Children (NCMEC) involving generative AI and child sexual exploitation surged within a six-month period, jumping from thousands to hundreds of thousands. The Internet Watch Foundation has also warned of dramatic growth in AI-generated child sexual abuse material, adding to already record levels.

Of course, the individuals creating and sharing abusive content should be held to account. But prevention cannot rest on ‘do not click’ messages and takedowns that happen after the harm has been done and spread. When a product makes it easy to create sexualised imagery from a photo, responsibility also sits with those who design, deploy and profit from these features, as well as with those who fail to act once misuse becomes clear.

This is why regulation matters. For example, in the UK, the Online Safety Act places duties on platforms to assess and mitigate risks from illegal content, including intimate image abuse, and to take proactive steps rather than relying only on user reports. The UK government has also set out measures aimed at stopping AI models from being misused to create synthetic child sexual abuse imagery at source, including stronger powers to test for and require safeguards on platforms.

If a tool can transform photos, it needs child safety by design as a baseline, not a bolt-on added once the damage has been done. Put simply, that means:

  • Blocking nudification and child sexualisation outputs through model restrictions and filtered workflows, with meaningful friction in high-risk features.
  • Detecting and removing illegal content fast, with clear escalation routes for urgent child safety cases.
  • Proving it works through transparent reporting and independent safety testing, with evidence that protections will hold at the scale of deployment.

Platforms should also make manipulated content easier to trace. Watermarking and provenance measures will not solve the problem alone, but they can help enforcement, media literacy and rapid takedown efforts, especially when combined with strong human review capacity for the highest-risk reports.

Governments need to close loopholes so the creation and sharing of AI-generated sexual abuse material and non-consensual sexual deepfakes are clearly criminalised and enforceable, and then back that up with strong regulation and resourcing. AI collapses the cost of producing abusive content; policy and enforcement have to raise the cost of enabling it.

System change is essential, but families also need usable steps to protect children now. Some tips parents and caregivers can use to protect the children in their lives from harmful AI image generation include:

  • Talk early about AI manipulation and consent without shaming or scaring children into silence.
  • Tighten privacy settings, especially around who can view, message or download content.
  • If something happens, do not engage with threats; save evidence, report quickly and get the support you need.
  • Use specialist reporting routes. For example, in the UK, Report Remove, run by Childline and the IWF, helps under-18s confidentially report sexual images of themselves and seek removal. If a child is being sexually coerced or groomed online, report to CEOP. If you are unsure where to go for help, check INHOPE’s website for the numbers of national hotlines, or report known abuse or exploitation anywhere in the world through NCMEC’s CyberTipline.

Never frame this as the child’s fault. The responsibility lies with abusers and with the systems that allow abuse to scale.

At Save the Children, we work with children, families, governments and industry to keep child protection aligned with technological change through evidence, advocacy and practical support that builds resilience without shifting responsibility onto children. That includes pushing for age-appropriate design, safer defaults and stronger accountability for the technologies that shape children’s lives.

This Safer Internet Day, the message is straightforward. Smart tech must come with safe choices built in. AI can bring extraordinary benefits, but no innovation is worth a world where children have to live with the fear that any photo could be weaponised against them. If we want a better internet, we have to design and regulate for it before the next tool goes viral.
