IICSA Independent Inquiry into Child Sexual Abuse

The Internet Investigation Report

Executive Summary

This investigation focusses on the growing problem of online-facilitated child sexual abuse. The increase in access to and use of the internet has brought undeniable benefits to society. It has also enabled a section of society to misuse the internet to distribute indecent images of children; groom and manipulate children in order to commit sexual acts on them; and live stream the sexual abuse of children from around the world.

The harm done to children and their families is incalculable. We heard evidence from victims and their families about the devastating and long-term impact that this abuse has on them. Those affected live in fear that images of them being sexually abused remain available on the internet. Parents described their children being groomed as “any parent’s nightmare”.[1]

Scale of online-facilitated child sexual abuse

There are millions of indecent images of children in circulation worldwide. The word ‘indecent’ describes a spectrum of offending, some of which reaches unprecedented levels of depravity and includes the rape and torture of babies and toddlers. Although the dark web often hosts images of the most deviant kind, the vast majority of sites that host indecent images of children are available on the open web and potentially accessible to a worldwide audience.

In 2015, BT found that “the average number of attempts to retrieve the CSA image was 36,738 every 24 hours”.[2] Extrapolated across all internet service providers, the number of attempts to access indecent images of children each day is alarmingly high.

Several police forces reported a rise in offences of online grooming. According to the National Society for the Prevention of Cruelty to Children (NSPCC), between April and September 2018, police recorded more than 10 grooming offences a day. Facebook, Instagram and Snapchat are frequently named as the most common platforms where grooming takes place.

It is wrong to assume that the live streaming of child sexual abuse does not involve children from the UK. The Internet Watch Foundation (IWF) frequently encounters images of live streams which involve children from Western backgrounds, the majority of whom are girls aged between seven and 13 years old. The sums paid to watch and in some cases direct the abuse are trivial, sometimes costing little more than one pound, thereby offering encouragement to would-be offenders to engage in child sexual abuse on a significant scale.

The true scale of offending and the number of children who have been victims of online-facilitated child sexual abuse is likely to be far higher than the number of reported offences.

The volume of online child sexual abuse and exploitation offences referred to law enforcement undoubtedly “represents a broader societal failure to protect vulnerable children”.[3]

This investigation examined the response of law enforcement, industry and government to online-facilitated child sexual abuse by considering the response to three types of offending: indecent images of children offences; the grooming of a child; and live streaming of child sexual abuse.

Indecent images of children

There have been significant efforts by internet companies to detect indecent images of children on their platforms and services. The development of PhotoDNA in 2009 greatly increased the ability of internet companies to detect known (ie previously identified) child sexual abuse imagery. Other technological developments now exist to identify newly created or previously unidentified indecent images and videos.

The IWF has made remarkable progress in removing child sexual abuse material from web addresses that are hosted in the UK. When the IWF was set up in 1996, the UK hosted 18 percent of the worldwide total of online child sexual abuse imagery. By 2018, the figure was 0.04 percent.

The increase in detection and the removal of indecent images is important but this does not address the issue of ease of access to this imagery. It is still possible to access indecent images of a child from common search engines in only “three clicks”.[4] The internet companies must do more to pre-screen material before it is uploaded to their platforms and systems. The Inquiry considers that preventing a user from accessing child sexual abuse material is a vital and necessary step in the fight against possession and distribution of indecent images of children.

Online grooming

There has been a rapid escalation in the number of children being groomed on the internet and, in particular, on social media platforms. Most internet companies either prohibit or discourage children under 13 years old from accessing their platforms or services.

However, we repeatedly heard evidence that children under 13 easily gained access to these services and that under-13s, especially girls, are at significant risk of being groomed. The internet companies failed to demonstrate that they were fully aware of the scale of underage use. The lack of a comprehensive plan from industry and government to combat this problem should be urgently addressed.

The Inquiry heard that collaboration between industry, law enforcement and government has resulted in a number of technological developments that help detect grooming. However, the Inquiry is not confident that internet companies are doing all they could to tackle online grooming on their platforms. More needs to be done than simply deploying existing technologies, many of which will not work where communication is encrypted. Encryption poses a real risk to the ability of law enforcement to detect and investigate online-facilitated child sexual abuse.

Live streaming of child sexual abuse

The institutional response to live streaming is not as well developed as the responses to the grooming of children and the possession and distribution of indecent images of children. Detecting child sexual abuse that is being live streamed is difficult for industry and law enforcement, given the real-time nature of the broadcast. The use of human moderators to monitor live streams is therefore a key feature of the response. We are unconvinced that internet companies fully understand the scale of the problem of live streaming on their platforms such that they can properly assess whether they employ sufficient numbers of moderators to detect such offending.

The response of industry and government

We repeatedly heard evidence from industry witnesses that their respective companies were committed to trying to prevent online-facilitated child sexual abuse. However, industry’s response was, at times, reactive and seemingly motivated by the desire to avoid reputational damage caused by adverse media reporting. Transparency reports published by the internet companies provide only part of the picture and there is a lack of guidance and regulation setting out the information that must be provided.

The government response includes the introduction in September 2020 of compulsory education in both primary and secondary schools that will help teach children about the need to stay safe online. The government also published its Online Harms White Paper aimed at tackling a wide range of online harms, including the threat of online child sexual abuse and exploitation. The Queen’s Speech in December 2019 included reference to the introduction of legislation to establish a new regulatory framework. The Online Harms proposals are wide-ranging but the timetable for implementation of this legislation is unclear. The prospective interim code of practice in respect of child sexual abuse and exploitation offers a very real opportunity to make children in the UK safer online. We therefore unhesitatingly recommend that the interim code is published without further delay.

This recommendation, along with the Inquiry’s other recommendations, aims to encourage greater collaboration between industry, law enforcement and government to put in place a strengthened and more rigorous regime to address the harm caused by online-facilitated child sexual abuse.
