
The Internet Investigation Report

E.3: Media reporting

22. Ms Canegallo was asked about an article published in The Times in December 2018.[1] The article suggested that perpetrators were posting comments in the live chat section of live streams encouraging children to take off their clothes or pose in sexualised positions, and that YouTube had failed to remove live-streamed videos showing the sexual abuse of children. The article said:

YouTube acknowledged that paedophiles had found a way to target children on the platform and ‘it recognised there’s still more to do’.[2]

23. Ms Canegallo told us that Google had investigated the matters raised in the article prior to its publication. As a result, 22 of the 37 videos (which had originally been live streams) were removed for violating Google’s child safety policies. Google also analysed the comments and live chats associated with the 37 videos, which resulted in 75 accounts being terminated and some referrals being made to NCMEC.[3] Ms Canegallo said that Google had “dramatically improved”[4] its comments classifier and that “the improvements in our comment classifier was not in response to this article”[5] but had been work that was ongoing throughout 2018.

24. In light of this response, Ms Canegallo was asked about a second newspaper article that appeared in The Guardian on 21 February 2019.[6] The article raised concerns about the comments section on YouTube. As the article explained, the YouTube videos themselves did not contain child sexual abuse material and were in fact videos of young girls playing, exercising and doing gymnastics. However, comments posted alongside those videos included sexual comments about children and “shared tips on when to pause the videos to take compromising still images of the children”.[7] The videos were accompanied by advertisements placed by companies such as Fortnite[8] and Disney, causing those companies to remove their adverts from YouTube.

25. The article also alleged that YouTube’s ‘Watch Next’ feature recommended more videos of children attracting similar comments:

After watching a few such videos on a new YouTube account … the site’s algorithm – designed to provide users with content they might like, to keep them watching – would serve up endless videos of apparently underage children where the comments section contained inappropriate comments.[9]

26. In the article, YouTube commented that the company had taken:

immediate action by deleting accounts and channels, reporting illegal activity to authorities and disabling comments on tens of millions of videos that include minors. There’s more to be done, and we continue to work to improve and catch abuse more quickly.[10]

Ms Canegallo explained to us that Google turned off the comments section because “[w]e saw that the comments classifier was not working as well as we wanted it to”.[11] She told us that Google is continuing to try to improve the comments classifier and is working on the ‘Watch Next’ algorithm to try to mitigate the risk of recommending inappropriate content.

27. Google reviewed the videos referenced in The Guardian article (including any comments). As a result, 360 accounts were terminated for violation of Google’s policies, including “in large part”[12] violations related to child sexual abuse material. Ms Canegallo said that she would have thought that the withdrawal of advertisements by the companies would have led to a loss of revenue for Google. When asked if she thought that the financial loss was the motivation behind Google’s efforts to combat the problems highlighted by the article, she said:

the work that the YouTube team has been doing throughout 2018, some of which has come to fruition recently, is the result of continued effort on the part of the team that was not prompted by any one article or news inquiry.[13]

28. In summer 2018, BT invested £100,000 to fund research into how machine learning techniques could help combat the live streaming of child sexual abuse. Mr Kevin Brown, Managing Director of BT Security, told us that this investment arose following a meeting between BT’s Chief Executive and the NCA, at which the NCA explained the trends that were emerging in respect of live streaming and asked if, and how, BT could help from a technological perspective. Mr Brown explained that in a typical live stream of child sexual abuse and exploitation, the perpetrator:

would join a video to the end destination where the abuse was actually taking place and, therefore, you focus in on the traffic behaviour, which … wouldn’t be consistent with a normal conversation as if myself and you were over a Skype conversation … the characteristics would be significantly different.[14]
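
Mr Brown’s description points to detection based on traffic characteristics rather than content: a viewer of a one-way live stream mostly receives data, whereas a conversational video call is roughly symmetric in both directions. The following Python sketch illustrates that single feature only. It is a minimal illustration of the idea, not BT’s system; FlowStats, the function name and all thresholds are assumptions, and a real detector would need to combine many further signals (on its own, this test would also flag ordinary one-way video streaming).

```python
from dataclasses import dataclass

@dataclass
class FlowStats:
    """Aggregate traffic statistics for one video session (hypothetical schema)."""
    bytes_up: int      # bytes sent from the viewer towards the host
    bytes_down: int    # bytes received by the viewer from the host
    duration_s: float  # session length in seconds

def looks_like_one_way_stream(flow: FlowStats,
                              asymmetry_threshold: float = 20.0,
                              min_duration_s: float = 60.0) -> bool:
    """Flag long sessions whose traffic is heavily one-directional,
    unlike a conversational video call where both sides send video."""
    if flow.duration_s < min_duration_s:
        return False
    if flow.bytes_up == 0:
        return flow.bytes_down > 0
    return flow.bytes_down / flow.bytes_up > asymmetry_threshold

# Example: a ten-minute session that is almost entirely inbound video.
print(looks_like_one_way_stream(
    FlowStats(bytes_up=200_000, bytes_down=90_000_000, duration_s=600.0)))  # True
```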

29. Mr Brown said that this technology was still at the testing stage but was due to be discussed at a round table meeting with the Home Secretary focussed on the issue of live streaming.[15] That meeting took place on 21 May 2019. Mr Christian Papaleontiou, Head of the Home Office’s Tackling Exploitation and Abuse Unit, provided an update on the meeting when he gave evidence at the public hearing the following day. He explained that the Home Office had established the Joint Security and Resilience Centre (JSaRC) to work “with industry to respond to emerging security challenges”.[16] Through JSaRC, the Home Office had a £250,000 fund available and invited bids from technology companies looking “to develop technical, technological solutions to tackle live streaming”.[17]

30. Five projects were successful in bidding for the fund.[18] They include:

  • a project that takes existing techniques used in processing still imagery and applies those techniques to live streaming (a minimal sketch of this idea follows the list);
  • technology that can analyse video streams and automatically link content depicting the same individuals or locations to assist in identifying victims and offenders;
  • development of a tool that identifies, disrupts and prevents child sexual abuse and exploitation by analysing viewers’ comments around the live streams; and
  • using machine learning to analyse video streams and automatically detect child sexual abuse and exploitation content.
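
The first of these projects, as described, reuses proven still-imagery techniques by applying them to individual frames of a live stream. The Python sketch below illustrates that idea under stated assumptions: sample_frames, classify_still_image and scan_live_stream are hypothetical stand-ins, not the funded project’s actual design.

```python
from typing import Iterable, Iterator

def sample_frames(frames: Iterable[bytes], every_n: int = 30) -> Iterator[bytes]:
    """Yield every n-th frame so a per-image model can keep up with live video."""
    for i, frame in enumerate(frames):
        if i % every_n == 0:
            yield frame

def classify_still_image(frame: bytes) -> float:
    """Stand-in for an existing still-imagery classifier returning a risk
    score between 0 and 1; in practice this would be a trained model."""
    return 0.0  # placeholder score

def scan_live_stream(frames: Iterable[bytes], threshold: float = 0.9) -> bool:
    """Apply the still-image technique frame by frame, flagging the stream
    for human review as soon as any sampled frame exceeds the threshold."""
    return any(classify_still_image(f) >= threshold
               for f in sample_frames(frames))
```

Sampling only every n-th frame is what makes a per-image model viable on live video; in this sketch a positive result flags the stream for human review rather than triggering automatic action.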

31. The Home Secretary announced a further £300,000 to help these projects develop. As Mr Papaleontiou said:

this is government trying to take a lead and show leadership in terms of identifying solutions … we want to work with and pick up with industry in terms of how we can … deploy some of those companies’ technical capabilities and technological capabilities to build on that and advance those projects or, indeed, other projects.[19]

32. In terms of how law enforcement and industry work together, Mr Robert Jones, Director of Threat Leadership for the NCA, gave a number of examples where a collaborative approach was beneficial in tackling live streaming. In particular, he identified Yubo as being a company that took positive steps to make the platform safer for children. Yubo (formerly called Yellow) is a social media app created in France that allows users to create live videos. It reportedly has approximately 20 million users. Mr Jones told us that Yubo was initially criticised for having no age verification or privacy controls. As a result, the app provided perpetrators with the opportunity to masquerade as a child and thereby groom children and live stream the abuse.

33. One of the ways in which Yubo enhanced its child safety measures was to moderate live streams in real time. Yubo uses algorithms to help detect child nudity. Where nudity is detected, a moderator will:

drop into live streams … and tell underage users to effectively cease and desist, to put their clothes back on, to stop. If that doesn’t happen, they will potentially lock that account.[20]
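
Mr Jones’s account implies a simple escalation workflow: an automated detector alerts a human moderator, who first warns the user and locks the account only if the behaviour continues. A minimal Python sketch of such a workflow follows; the states and names are illustrative assumptions, not Yubo’s actual implementation.

```python
from enum import Enum, auto

class AccountState(Enum):
    ACTIVE = auto()
    WARNED = auto()
    LOCKED = auto()

def escalate(state: AccountState, detector_fired: bool) -> AccountState:
    """One moderation step: warn on the first detection, lock on the next."""
    if not detector_fired:
        return state
    if state is AccountState.ACTIVE:
        return AccountState.WARNED  # moderator drops in and tells the user to stop
    return AccountState.LOCKED      # warning ignored: lock the account

# Example: the detector fires twice in a row on the same stream.
state = AccountState.ACTIVE
for fired in (True, True):
    state = escalate(state, fired)
print(state)  # AccountState.LOCKED
```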

34. Mr Jones said that the NCA shared Yubo’s approach to moderation with other companies in the industry. He said:

there is nothing in this which other industry providers don’t know about. The issue is scale and, you know, that is something that can be solved with investment.[21]

35. Mr Jones also told us about a live streaming platform that, following feedback from the NCA, changed its reporting systems to NCMEC to provide additional information that would assist in identifying the perpetrator’s account or accounts.[22]

36. In the context of online-facilitated child sexual abuse, live streaming is a relatively new phenomenon and, as such, the law enforcement and industry response is not as well developed as it is in respect of grooming and the viewing of indecent images. It is important for companies to understand the scale of the problem on their platforms and to ensure they have sufficient numbers of moderators to monitor and review suspected live streaming of child sexual abuse and exploitation. Although the live streaming of child sexual abuse is difficult to detect and prevent technologically, the methods adopted by Yubo are a good example of what can be achieved by combining technology and human moderation.
