IICSA Independent Inquiry into Child Sexual Abuse

The Internet Investigation Report

C.4: Media reporting

79. In late 2018 and early 2019, a number of articles appeared in the media alleging that Google,[1] Microsoft[2] and Facebook[3] were allowing their services to be used by offenders to share child sexual abuse images and groom children. In advance of the hearing, the Inquiry provided witnesses from these companies with these articles, in order that they could respond to the contents.

80. In relation to Microsoft, one article stated that when terms such as ‘porn kids’ or ‘nude family kids’ were typed into Bing (Microsoft’s search engine), indecent images of children were returned in the results. Microsoft’s own investigations suggested that the images were not in fact illegal but were sexually explicit images of individuals over the age of 18. As a result of the article, Microsoft made changes to Bing to ensure that adult content was not returned in response to search queries relating to child sexual abuse or exploitation.

81. The article also stated that when seemingly innocent search terms were used, Bing auto-suggested search terms which led to indecent images. Microsoft accepted that common search terms should not deliver “suboptimal results”.[4] Mr Milward said that this article had prompted Microsoft to “fundamentally sit down and rethink the way in which we were devoting engineering attention to the challenge that we face here”.[5]

82. In December 2018, an article on the BBC news website[6] stated that apps were available to download on the Google Play Store which directed users to WhatsApp groups that were being used to share child sexual abuse images. On behalf of Google, Ms Canegallo explained[7] that a prospective app is reviewed before it is uploaded to the store to ensure it does not violate Google’s policies. It is then subject to periodic reviews and would also be reviewed if a user flagged the app for a suspected breach of policy. Ms Canegallo said she was confident that, had such material been present at the initial review, the app would not have been available in the app store.[8] Despite the review process, however, it would appear that in this example the review did not detect the material. Google told us that, once it became aware of the issues raised in the article, it suspended the apps from the Google Play Store and terminated the developer accounts. Two reports were made to NCMEC due to the content of the apps.

83. Following the BBC article, further investigations[9] into WhatsApp revealed groups with names such as ‘Only Child Pornography’ and ‘Gay Kids Sex Only’. The article stated that a WhatsApp spokesperson had said:

Recent reports have shown that both app stores and communications services are being misused to spread abusive content, which is why technology companies must work together to stop it.[10]

84. When asked how WhatsApp prevents a group from having such titles and from sharing indecent imagery, Ms de Bailliencourt told us that WhatsApp uses PhotoDNA and has “some proactive detection mechanism in place to flag and pull down anything that may – that may appear to be of this nature”.[11]

85. One of the factors that prompted internet companies to review their procedures, or to consider future improvements, appears to be the reputational damage caused by adverse media reporting. Some of the changes we heard about were made as a result of negative publicity that affected their business model. It is this impact that seemingly drives or expedites revision and innovation as much as any concerted commitment to prevent access to indecent images of children.
