
The Report of the Independent Inquiry into Child Sexual Abuse

Final report

F.3: Detecting online child sexual abuse

21. Much online-facilitated child sexual abuse, such as grooming, is similarly hard to detect. Online perpetrators frequently disguise their true age and identity – often masquerading as children in an attempt to gain the child’s trust. Some online messaging services are designed so that messages and images are automatically deleted, thereby enabling perpetrators to evade detection. For example, the Inquiry’s research found that Snapchat was often the social media platform “of choice” for perpetrators, and that social media and technology such as this have exacerbated the prevalence of child sexual abuse because they offer perpetrators more opportunities to access children and to offend while remaining undetected.[1]

22. Internet and social media companies, including Google, Meta (formerly Facebook, which owns WhatsApp and Instagram), Microsoft and Apple, have developed technology to detect online-facilitated child sexual abuse. The methods of detection vary depending on the type of abuse.

Child sexual abuse images

23. The number of people accessing child sexual abuse images continues to grow. During a one-month period in the first 2020 lockdown, the Internet Watch Foundation estimated there were 8.8 million attempts by UK internet users to access child sexual abuse imagery.[2]

24. Techniques for detecting child sexual abuse images vary, depending on whether the image has previously been identified by law enforcement or industry as a child sexual abuse image (referred to as a ‘known’ child sexual abuse image). PhotoDNA, web crawlers, Artificial Intelligence (AI), machine learning and classifiers are all used to detect imagery, with many companies making their technology available to other internet companies.[3]
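The principle behind detecting ‘known’ images can be illustrated with a short, simplified sketch. This is not PhotoDNA or any company’s actual system: PhotoDNA uses a proprietary perceptual hash that still matches re-encoded or slightly altered copies, whereas the illustration below uses an ordinary cryptographic hash and invented placeholder values, purely to show the idea of comparing a digital fingerprint of a file against a list of fingerprints of previously identified material.

```python
import hashlib
from pathlib import Path

# Hypothetical list of fingerprints of previously identified ("known")
# child sexual abuse images, of the kind supplied to service providers
# by bodies such as the Internet Watch Foundation or NCMEC.
# The value below is a placeholder, not a real hash.
KNOWN_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}


def fingerprint(path: Path) -> str:
    """Return a hex digest of the file's contents.

    A real system would use a perceptual hash (such as PhotoDNA) so that
    resized or slightly altered copies still match; SHA-256 only matches
    byte-identical files and is used here purely for illustration.
    """
    return hashlib.sha256(path.read_bytes()).hexdigest()


def is_known_material(path: Path) -> bool:
    """True if the file's fingerprint appears in the known-material list."""
    return fingerprint(path) in KNOWN_HASHES
```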

25. Where known child sexual abuse images are detected, US law requires Electronic Service Providers (ESPs) based in the US to report child sexual abuse material to the National Center for Missing & Exploited Children (NCMEC). They must provide information about the suspected perpetrator, such as an email address or Internet Protocol (IP) address.[4] In practice, all the major internet companies (including Meta, Google and Microsoft) are subject to this legal requirement. In 2018, this resulted in 18.4 million reports being made to NCMEC.[5] In 2021, NCMEC received nearly 30 million reports.[6]

26. NCMEC sends reports relating to the UK to the National Crime Agency (NCA). The NCA responds to the most serious reports itself, and passes others on to local police forces for them to investigate and make any necessary arrests.[7] This form of mandatory reporting has therefore had a significant positive impact on the way US institutions report child sexual abuse material and an equally positive impact in assisting UK law enforcement to identify perpetrators based in the UK. Once the Online Safety Bill is passed, UK companies will be under a duty to report any child sexual exploitation and abuse content that they encounter to the NCA.[8]

Pre-screening

27. Pre-screening enables internet companies to prevent child sexual abuse images from ever being uploaded to platforms and social media profiles. The images cannot therefore be viewed or shared, preventing access to the material.

28. In August 2021, Apple announced that it had developed technology to scan US-based user devices for known child sexual abuse images before an image is stored on its cloud storage service, iCloud. It was intended that the implementation of this feature would be kept under review before being rolled out worldwide. Subsequently, Apple announced that it was delaying these plans, pending “improvements before releasing these critically important child safety features”.[9] Under the proposed system, where a match is found the image would be reviewed by a human reviewer and, if it contains child sexual abuse material, a report would be made to NCMEC.[10] This type of pre-screening for known indecent images is welcome.
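The pre-screening workflow described above can also be sketched in simplified form. This is a hypothetical pipeline, not Apple’s or any other provider’s implementation: it reuses the illustrative is_known_material check from the earlier sketch, and the human-review and reporting steps are placeholders standing in for whatever processes a provider operates.

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass
class UploadDecision:
    accepted: bool
    reason: str


def queue_for_human_review(path: Path) -> bool:
    """Placeholder: a trained reviewer confirms or rejects the match."""
    return True


def report_to_authority(path: Path) -> None:
    """Placeholder: file a report with NCMEC (or, for UK providers, the NCA)."""


def pre_screen_upload(path: Path) -> UploadDecision:
    """Decide whether a file may be stored or shared, before upload completes."""
    if is_known_material(path):            # matching step (see earlier sketch)
        if queue_for_human_review(path):   # human reviewer confirms the match
            report_to_authority(path)
        return UploadDecision(False, "matched known child sexual abuse material")
    return UploadDecision(True, "no match against known material")
```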

29. In The Internet Investigation Report, published in March 2020, the Inquiry recommended that the government should require internet companies to pre-screen for known child sexual abuse images before material is uploaded.[11] In its response, the government referred to the Interim Code of Practice on Online Child Sexual Exploitation and Abuse (Interim Code) (see Part J) which sets out the government’s “expectation that all companies will prevent access to known child sexual abuse material”.[12] The first principle in the Interim Code states that companies “seek to prevent known child sexual abuse material from being made available to users or accessible on their platforms and services, take appropriate action under their terms of service, and report it to appropriate authorities”.[13] The government’s response went on to state that “Pre-screening is one means of preventing access, recognising that this threat and the response that it requires may vary depending on the type and nature of the service offered”.[14]

30. The Interim Code sets out what is ‘expected’ of companies, but this does not go far enough, given that the technology to pre-screen exists and is effective in preventing known child sexual abuse material from being made available to users. In due course, it will be for the Office of Communications (Ofcom), as the online safety regulator, to issue the code of practice, but the Inquiry considers it imperative that pre-screening is utilised to its fullest extent and becomes a mandatory feature of the code of practice.

31. In March 2022, the Online Safety Bill was laid before Parliament. It is not known how long it will take for the legislation to be enacted and come into force, which provisions will be enacted, or in what precise form. However, ensuring that users do not encounter child sexual abuse material is imperative, irrespective of the type and nature of the service offered and irrespective of any possible amendments to the Bill. The Inquiry therefore recommends that pre-screening for known child sexual abuse images should be a mandatory feature of the code of practice.

Recommendation 12: Pre-screening

The Inquiry recommends that the UK government makes it mandatory for all regulated providers of search services and user-to-user services to pre-screen for known child sexual abuse material.

Online grooming

32. In the year ending September 2021, police forces in England and in Wales recorded 6,833 grooming offences, a rise of approximately 53 percent on the recorded figures for 2017/18.[15]

33. In addition to police officers operating undercover in internet chatrooms and forums used by suspected offenders, many of the internet companies use human moderators to review content and take action where there is a breach of the company’s online policies, not just those policies relating to child sexual abuse.[16] Not every internet company provided the Inquiry with the numbers of moderators they employed and, even where those figures were provided, it was far from clear that the numbers were sufficient to meet the increase in online-facilitated child sexual abuse.[17] Given the escalation in grooming offences, internet companies will need more moderators to complement and add to their technological means of identifying abuse. In addition, the internet companies need to be alert to the difficult and traumatic material to which moderators can be exposed, and must pay careful attention to their welfare.

34. The internet companies also use classifiers to detect not just key words but patterns of behaviour that might indicate grooming is taking place. In 2018, the Home Secretary convened a Hackathon (a collaborative event for computer programmers) attended by all the major internet companies. In just 48 hours, engineers from those internet companies developed a prototype technology that could potentially be used to flag conversations that might be indicative of grooming. Following a second ‘mini’ Hackathon in 2019, the technology was launched in 2020.[18] These collaborative conferences brought about significant technological developments within a very short time, and ought to be a regular and ongoing feature of the response to online child sexual abuse.
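A classifier of this kind combines behavioural signals rather than relying on individual keywords. The sketch below is purely illustrative and is not the prototype developed at the Hackathons: the signals, weights and threshold are invented for the example, and a real system would learn them from labelled data and would only ever flag conversations for human moderator review.

```python
from dataclasses import dataclass


@dataclass
class ConversationSignals:
    """Behavioural signals extracted from a conversation (all hypothetical)."""
    adult_contacting_minor: bool    # accounts' stated or estimated ages differ sharply
    rapid_escalation: bool          # very high message frequency soon after first contact
    requests_private_channel: bool  # pressure to move to a private or encrypted app
    requests_images: bool           # requests for photographs or video
    secrecy_language: bool          # language urging the child to keep contact secret


# Invented illustrative weights; a production classifier would learn these.
WEIGHTS = {
    "adult_contacting_minor": 0.30,
    "rapid_escalation": 0.15,
    "requests_private_channel": 0.20,
    "requests_images": 0.20,
    "secrecy_language": 0.15,
}
FLAG_THRESHOLD = 0.5  # invented threshold


def grooming_risk_score(signals: ConversationSignals) -> float:
    """Combine the behavioural signals into a single 0-1 risk score."""
    return sum(weight for name, weight in WEIGHTS.items() if getattr(signals, name))


def should_flag_for_review(signals: ConversationSignals) -> bool:
    """Flag the conversation for human moderator review, never automatic action."""
    return grooming_risk_score(signals) >= FLAG_THRESHOLD
```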

Live streaming

35. A large proportion of victims of live streaming come from poorer countries, often from Southeast Asia. However, in 2018, the Internet Watch Foundation published research which examined the distribution of captures of live streamed child sexual abuse and indicated that it more frequently encountered images “involving white girls, apparently from relatively affluent Western backgrounds”.[19] Live streaming is a problem affecting children in England and in Wales.

36. In addition, demand for live streamed sexual abuse is seemingly fuelled by individuals within the UK. The WeProtect Global Threat Assessment 2021 notes that live streaming for payment has increased, exacerbated by the COVID-19 pandemic, with the ‘consumers’ of this material coming “predominantly from Europe, North America and Australia”.[20]

37. The speed and real-time nature of live streaming make it extremely difficult to police interactions between the live streamer and the recipient as they happen. The practical effect of this is that it is harder for industry to deploy technology to detect, moderate or prevent live streamed child sexual abuse material. The internet companies deploy some technology to detect potentially inappropriate comments that are often posted alongside a live stream, but it is clear that further investment is required to detect this form of online-facilitated abuse.

Detection in the future

38. There remain a number of notable impediments to the future detection of child sexual abuse material, including the increased use of the dark web (discussed in Part J). While the majority of websites that host indecent images of children are accessed via the open web, offending also takes place on the dark web. This is the part of the world wide web that is accessible only by means of specialist web browsers and cannot be reached through well-known search engines. As set out in The Internet Investigation Report, at any one time the dark web is home to approximately 30,000 live sites, just under half of which are considered to contain criminal content, including, but not limited to, child sexual abuse and exploitation content.[21] It hosts some of the most depraved and sickening child sexual abuse imagery and material.

39. One of the most significant impediments to detection is end-to-end encryption (E2EE). Encryption is the process of converting information or data into a code that makes it unreadable to unauthorised parties. Many means of communication, such as WhatsApp, iMessage and FaceTime, use end-to-end encryption, which means that the content of a communication can only be seen by the sender and the recipient. As a result, law enforcement and the providers of the messaging platform cannot access the content (unless they are physically in possession of the handset or device), and many of the technological tools for detecting online offending simply do not work.
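The effect of end-to-end encryption on detection can be illustrated with a minimal sketch using the PyNaCl library (chosen here only for illustration; messaging services implement their own protocols, such as the Signal protocol used by WhatsApp). Only the holders of the relevant private keys can read the message; a platform relaying it sees only ciphertext, so content-scanning tools have no plaintext to examine.

```python
# pip install pynacl  -- illustrative only; real messaging apps use their own
# end-to-end encryption protocols.
from nacl.public import PrivateKey, Box

# Each party generates a key pair; private keys never leave their devices.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
sender_box = Box(sender_key, recipient_key.public_key)
ciphertext = sender_box.encrypt(b"message content")

# A platform relaying this holds neither private key: it sees only the
# ciphertext, cannot decrypt it, and has nothing readable to scan.

# Only the recipient, holding the matching private key, can recover the text.
recipient_box = Box(recipient_key, sender_key.public_key)
plaintext = recipient_box.decrypt(ciphertext)
assert plaintext == b"message content"
```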

40. The increased use of end-to-end encryption has significant practical implications.

40.1. In 2019, Facebook announced its intention to introduce end-to-end encryption on the Facebook Messenger and Instagram platforms. In 2020, Facebook provided 20.3 million child sexual abuse referrals to NCMEC. NCMEC’s previous assessment is that 70 percent of Facebook’s total referrals relate to Messenger (Facebook’s instant messaging service, which also allows images and videos to be shared), and so the number of referrals is likely to diminish significantly once that service is end-to-end encrypted.[22] As Mr Rob Jones, Director of Threat Leadership at the NCA, commented, Facebook’s move to end-to-end encryption would “take away” the “crown jewels from the online protection response”.[23]

40.2. Since the start of 2019, Project Arachnid (a web crawler that searches for child sexual abuse material) has detected more than 5,500 pages on the dark web hosting child sexual abuse material.[24] However, because the identity of the server is anonymised, notices requesting removal of the material cannot be sent. Project Arachnid has also detected a large volume of child sexual abuse material related to prepubescent children that is made available on dark web forums but actually sits on open web sources in encrypted archives. By virtue of encryption, scanning techniques cannot detect the imagery.

41. The government’s Interim Code states that there is:

“detailed guidance for companies to help them understand and respond to the breadth of CSEA [child sexual exploitation and abuse] threats, recognising that this threat and the response that it requires will vary depending on the type and nature of the service offered … we encourage all companies to be proactive and ambitious in how they consider and implement the recommendations within this interim code of practice.”[25]

42. While the Interim Code acknowledges the threat posed by encryption and requires companies to consider the potential harm created by it (including how the risk of this harm might be mitigated), it falls short of proposing any solution to the problem. In addition, the Information Commissioner’s Office (ICO) recognises that the balance between addressing concerns about online safety and the need to keep personal data secure and private (brought about by end-to-end encryption) is a difficult one. The ICO considers that “positioning E2EE and online safety as being in inevitable opposition is a false dichotomy” and that a more “nuanced and detailed understanding of the broader issues” is required.[26] To that end, it engaged with the government’s ‘Safety Tech Challenge Fund’, which aims to:

“encourage the tech industry to find practical solutions to combat child sexual exploitation and abuse online, without impacting people’s rights to privacy and data protection in their communications”.[27]

43. In November 2021, the government announced that £555,000 had been awarded to five projects as part of the Safety Tech Challenge Fund. One of the projects will develop a plug-in to be integrated within encrypted social platforms to detect known child sexual abuse material.[28] This forms part of wider spending commitments by the Home Office which, in the financial year 2022/23, exceed £60 million.[29]

44. Technological advances such as these projects are positive steps but more is required. The Online Safety Bill proposes giving Ofcom the power to require providers to use “accredited technology” to identify child sexual exploitation and abuse content whether “communicated publicly or privately” and to take that content down.[30] In July 2022, the Home Office announced an amendment to the Bill to give Ofcom the power to issue a company with a Notice to use “best endeavours” to develop technology to prevent, identify and remove child sexual abuse material, including on services that are encrypted.[31]

45. If these provisions are enacted, Ofcom may require specific technologies to be deployed on encrypted services, but this is a measure of last resort. It does not detract from the reality that encryption represents a serious challenge to the detection of online-facilitated child sexual abuse, and is likely to result in child sexual abuse offences going undetected.

46. While there is an ever-increasing awareness of the need to protect personal data and online privacy, the emerging regulatory landscape must ensure that there is effective protection of children from online-facilitated sexual exploitation and abuse. That must remain the priority.

References
