


The Internet Investigation Report

F.3: Transparency reports

The content of transparency reports

22. The White Paper also proposes that the regulator will have the power to require companies to provide annual transparency reports “outlining the prevalence of harmful content on their platforms and what countermeasures they are taking to address these”.[1] It envisages that the transparency reports will include details about the procedures the company has in place for reporting illegal (and harmful) content, including the number of reports received and how many of those reports led to action being taken. The reports will also include information about the proactive steps or tools the company uses to prevent and detect illegal content, and details of its cooperation with UK law enforcement.

23. The publication of transparency reports is not a concept that is new to the internet industry. Google has been publishing such reports since 2010 and Facebook, Apple and Microsoft since 2013.

24. There is, however, no consistency in the content of the reports. For example, Apple’s reports focus upon the number of government requests it receives for information about emergency cases (where there is a risk of death or serious injury), accounts or devices, or ‘financial identifiers’ (to assist in cases of suspected fraud). Microsoft’s reports look at the number of law enforcement requests and whether the request is for content or non-content data from a Microsoft account. Google’s and Facebook’s reports include some details about the amount of content that is removed from their services and the reasons for that removal.

25. Facebook was asked about its transparency report in respect of ‘Child Nudity and Sexual Exploitation of Children’, published in November 2018.[2]

25.1. In response to a question in the transparency report ‘How prevalent were child nudity and sexual exploitation violations on Facebook?’, Facebook replied “we can’t reliably estimate it”.[3] The report says Facebook took action on 8.7 million pieces of content (in the quarter July to September 2018) and that 99.2 percent of this content was flagged and removed before users reported it to the company. What is not set out, though, is any real context for these figures. First, the figures include both illegal images of child sexual abuse and lawful images of child nudity – it is therefore not possible to ascertain how much illegal content was found on Facebook. Second, while the removal of millions of pieces of content is significant, the report does not state how much content overall was uploaded to Facebook in this period. It is therefore difficult to assess whether these figures represent a ‘success story’ or are being used to mask an underlying problem in the way Facebook tackles child sexual abuse material.

25.2. We were told that Facebook could not express “the prevalence related to child sexual exploitation in a way that is accurate yet”.[4] Ms Julie de Bailliencourt, Facebook’s Senior Manager for the Global Operations Team, explained that Facebook was working with the Data Transparency Advisory Group (based at Yale University) to ensure that Facebook was approaching its data collection in the “right way”.[5] She said that “Adult nudity is more prevalent on the platform than child sexual exploitation”.[6] When asked how Facebook could make such an assertion if the amount of child sexual abuse and exploitation content was not known, she said:

“the amount of time our team may encounter child sexual exploitation versus other types of violating content is minimal”.[7]

26. Google’s transparency report for April to June 2018[8] records that YouTube removed nearly 7.8 million videos for breach of its Community Guidelines in the quarter. Of those, 88 percent were identified as a result of automated flagging. In the same quarter, human flaggers (including trusted flaggers[9]) reported over 9.6 million videos. The report states that where a human flagger reports a video, they can select a reason for their report, and that 27.4 percent of those reports gave ‘sexual’ as the reason.

27. It would be wrong to assume that 27.4 percent of content removed from YouTube related to sexual offending. The data only records the reason the reporter gave for flagging the video and does not inform the reader if the video did in fact breach the Community Guidelines and, if so, whether the content was illegal and/or related to child sexual abuse and exploitation. Ms Kristie Canegallo, Vice President and Global Lead for Trust and Safety at Google, explained that Google “continually update the transparency report to provide more information”[10] and that “there would be more information around child safety in subsequent reports”.[11]

28. In relation to the transparency reports, Mr Carr was of the view that Google and Facebook “tell us what they think they want to be transparent about”.[12] He said:

“And they’re very reluctant to disclose, as you can imagine, exactly what scale of illegal activity is taking place on their platform, but I think we have a right to know”.[13]

29. Mr Tony Stower, Head of Child Safety Online at the National Society for the Prevention of Cruelty to Children (NSPCC), was equally critical of the reports.

“The crucial point is, here, that they are deciding what to be transparent about … and that makes it completely impossible for any parent, or indeed any child, to compare the services and make an informed choice.”[14]

30. Transparency reports are important to the public’s ability to scrutinise industry’s efforts to combat online-facilitated child sexual abuse. The Inquiry heard repeatedly from industry witnesses that their respective companies were doing all they could to detect and prevent their platforms from being used to facilitate child sexual abuse. It is difficult at present to assess the accuracy or otherwise of those assertions. There needs to be consistency in the information a company provides about the amount of child sexual abuse content on its platforms or services. This could include, for example, data about the number of reports made to the National Center for Missing & Exploited Children (NCMEC), how many accounts were closed for child sexual abuse and exploitation violations, how many requests the company receives from law enforcement in respect of child sexual abuse and exploitation investigations, and how much illegal content was found as a result of proactive detection technology and/or human reporting.

‘Naming and shaming’

31. One of the proposals in the White Paper is the publication of public notices setting out where a company has failed to comply with the regulations or the regulator. Mr Carr said that in his experience “the threat of naming and shaming is one of the few weapons that seems to work reliably with internet companies”.[15]

32. Earlier parts of this report have considered the ways in which the industry responded in 2018/19 to reports in the media of child sexual abuse content being found on their platforms. Invariably, once alerted to the problem, the companies were quick to take action.

33. Mr Robert Jones, Director of Threat Leadership for the NCA, was asked why the NCA does not routinely ‘name and shame’ those companies that law enforcement considers are failing to respond to the growing online threat. Mr Jones said that when dealing with the companies individually there was “good and regular dialogue”[16] and that, generally speaking, when the NCA made a request for intelligence or evidence, that information was provided. He considered that the companies’ responses were reactive but that the “proactivity of going on the front foot to … meet this threat, isn’t what we would like it to be”.[17]

34. Mr Jones explained that the NCA did hold joint forums with industry but that it was “very, very difficult to get the level of openness and transparency amongst all of the companies at the same time”.[18] He thought that it would be “unfair” to name and shame a company without providing “operational context”.[19] From the NCA’s perspective:

“the challenge for us is that calling out one company doesn’t help, because the internet is a global phenomenon and we need everybody to get behind the objective of reducing access to these images”.[20]

35. Chief Constable Bailey was asked about a May 2019 press release[21] in which he advocated a public boycott of social media. He told us that whilst he was “proud” of the work done by law enforcement to protect children, he did not consider that efforts to raise the public profile in respect of online child sexual abuse had received “the public impact in terms of outrage at what has actually taken place”.[22] He said that the power of the regulator to impose fines would be an appropriate and effective sanction for some companies but that “for some of these companies, who are worth billions, then actually a fine is a drop in the ocean”.[23] It was for this reason that he advocated a boycott because, as he said in the press release:

“Ultimately … the only thing they will genuinely respond to is when their brand is damaged.”[24]

36. In the event of a failure to comply with the regulations or the regulator, the power to name and shame is an important tool for the regulator.
