
IICSA Independent Inquiry into Child Sexual Abuse

The Internet Investigation Report

D.5: Detection

Law enforcement

38. The law enforcement response to online grooming initially lagged behind the response to the viewing and distribution of child sexual abuse imagery. In 2016, law enforcement acknowledged that:

“The police approach to targeting those who abuse children online has been disproportionately directed to those accessing indecent imagery. Comparatively little resource has been directed towards grooming which arguably represents a greater threat to children”.[1]

39. Mr Keith Niven, Deputy Director Support to the NCA’s Child Exploitation and Online Protection Centre, said that policing was “very focused”[2] on grooming and that law enforcement “proactively deploys sensitive techniques to detect online grooming”.[3] These techniques include officers operating in internet chatrooms and forums used by suspected offenders.[4] Chief Constable Simon Bailey, the National Police Chiefs’ Council (NPCC) Lead for Child Protection and Abuse Investigations, explained that “dedicated trained specialists” were used “to interact with offenders online”.[5]

40. In 2017, the Police Transformation Fund (PTF)[6] awarded £20.39 million over three years to enable regional organised crime units (ROCUs) to increase their undercover online (UCOL) capabilities.[7] Mr Papaleontiou described UCOL work as “critically important in terms of bearing down on grooming”.[8] In September 2018, the Home Secretary announced a further £4.6 million to support UCOL work in the ROCUs.[9]

41. It is clear that the scale of the law enforcement response to online grooming has increased in a short period of time. However, as we consider in the next section of this report, the Inquiry also heard criticisms of the law enforcement response.

Online child abuse activist groups

42. Dark Justice is an online organisation which aims to uncover those who groom children over the internet. Its founders pose as children, on platforms such as Facebook and Snapchat, by setting up a decoy profile. The decoy profile makes clear that the person is a child. When the offender sexualises the communication and arranges to meet the ‘child’ in person, Dark Justice films the encounter. Dark Justice then contacts the police and provides the police with records of the offending.

43. Dark Justice said that they were seeking to assist the police “in an area where they do not have the expertise, understanding or resources to act properly or at all, to protect children from sexual abuse”.[10] Dark Justice gave an example where they were told (by a parent) that the police had been unable to trace an online groomer but that “[w]ithin 15 minutes” they were able to ascertain a name and address for the person and pass those details to the police.[11] They also said they had assisted in the arrests of 165 people, of whom 96 were convicted.

44. Chief Constable Bailey told the Inquiry that the police did not support working with online child abuse activist groups “for a significant number of reasons”.[12] His “greatest fear”[13] was that the operations of these groups were mounted without due regard to safeguarding risks to suspects and their families, including any children. He had concerns about whether the investigations had been conducted properly and about the quality of the evidence that these groups collected, and he told us of instances where suspects had been blackmailed or assaulted. Chief Constable Bailey gave an example where an online child abuse activist group had live-streamed their confrontation with a man accused of trying to meet a 14-year-old child.[14] The man denied the allegation, saying that he thought he was meeting a 48-year-old woman. The man was verbally abused by a neighbour who had seen the broadcast, and later that same day took his own life. The police reviewed the evidence provided by the online child abuse activist group and found:

“no evidence to suggest that the male thought that he was meeting a 14 year old child … there was nothing to show that they had said that they were 14 years of age”.[15]

45. When asked whether (as suggested by Dark Justice) he envisaged there could be a framework or agreement so that police could use the resources of such groups while avoiding safeguarding risks, Chief Constable Bailey answered “genuinely – I don’t”.[16] He defended the law enforcement response to online grooming:

“over 400 people being arrested every month, month after month, after month … to say that we don’t have the expertise, the skills, the capacity, quite frankly, I just think is misleading and it’s not true”.[17]

Industry

46. In the Serious and Organised Crime Strategy 2018, the government expressed a clear and unqualified expectation of what technology companies must do about online grooming:

“companies must stop online grooming taking place on their platforms”.[18]

47. Companies use a variety of techniques to detect grooming.

Moderators

48. Between February 2018 and May 2019, Facebook doubled its number of moderators (referred to by Facebook as ‘content reviewers’) from 7,500 to 15,000 reviewers worldwide.[19] The moderators review content and take action where there has been a breach of Facebook’s ‘Community Standards’. The Community Standards cover a wide range of content and include a policy on ‘child nudity and sexual exploitation of children’.

49. When asked why the number of reviewers was increased, Ms de Bailliencourt said:

“I don’t think there was anything specifically that triggered this particular investment, I think the company, as a whole, is incredibly dedicated to making sure that we have the right amount of people able to review content …”[20]

She was not aware of any plans to increase the number of moderators further through 2019 and into 2020.[21]

50. When asked how Facebook knew whether 15,000 moderators was enough, Ms de Bailliencourt said:

“When I speak to experts in this area, they often focus really on the number of people. We don’t tend to look at it this way, we tend to think of the speed of our response and the adequacy of our response. We do this by using automation, machine learning, AI, as well as people … If we had reasons to believe that we were lagging behind or not good enough or taking too long to respond to a particular challenge, this is where I have seen investment in new teams, new technology, new expertise brought in on certain topics”.[22]

51. Mr Milward did not provide the Inquiry with the number of moderators employed by Microsoft:[23]

“we would rather keep that information private. It is in the tens, not in the hundreds or in the thousands, and bear in mind that this is the team that reviews content to determine whether it is child sexual abuse material or not. This is not the limit to the resources that are placed on tackling this whole issue, which, again, is in the thousands”.[24]

52. In December 2017, Google announced that by 2018 it aimed to have over 10,000 people working on content that might violate Google’s policies.[25] Ms Canegallo could not state the number of reviewers prior to the increase. When asked what prompted this increase, she said:

“I think it was a – a natural reflection of the priority that we, at Google, place … in ensuring that users are having a safe experience and that we’re being a responsible platform. So as online and off-line harms proliferate, it is natural that that responsibility necessitates Google to increase our investment in this area …”[26]

53. The increase in the number of people employed by Google and Facebook to review content, including child sexual abuse content, is significant, but it remains unclear whether the increase is enough. Industry needs to demonstrate a better understanding of the scale of child sexual abuse imagery and grooming on its services and products. Only once these data are known can the adequacy of resources (in terms of developing technology and employing sufficient numbers of human reviewers or moderators specifically focused on child sexual abuse and exploitation) be assessed.

Technological methods of detection

54. Ms de Bailliencourt explained that, in 2012, Facebook realised that given the “high probability that a child would not report being groomed”, it needed to take a further step to “identify this type of behaviour regardless of a user report”.[27] Since then, Facebook had been “working hard”[28] to improve its detection mechanism. The technology had developed from a “quite rudimentary” state in 2012, to what is now a “behavioural classifier” involving “quite sophisticated pattern recognition rather than simply key word flagging detection” to detect grooming.[29] The behavioural classifier looks at “patterns of behaviour that may indicate that someone is trying to approach, or behaves in a predatorial way towards children on the platform”.[30] Where grooming is detected, the matter is reported to the National Center for Missing & Exploited Children (NCMEC) or, where necessary, directly to law enforcement.
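
The Inquiry was not given the internal details of the behavioural classifier. Purely by way of illustration of what “pattern recognition rather than simply key word flagging” can mean in practice, the following sketch scores an interaction on aggregated behavioural signals; every feature name, weight and threshold is an assumption made for this example and is not drawn from Facebook’s system.

```python
from dataclasses import dataclass


@dataclass
class InteractionFeatures:
    """Aggregated behavioural signals for one adult-to-child contact.

    Every field, weight and threshold below is an illustrative assumption,
    not a detail of Facebook's classifier.
    """
    age_gap_years: float              # declared age difference between the accounts
    contact_requests_to_minors: int   # recent friend requests sent to child accounts
    reply_ratio: float                # share of the conversation written by the adult (0-1)
    off_platform_requests: int        # asks for a phone number, another app, etc.
    mutual_friends: int               # shared connections (more suggests a real-world link)


def grooming_risk_score(f: InteractionFeatures) -> float:
    """Toy linear score over behavioural features, clamped to [0, 1].

    A production behavioural classifier would be a trained model over far
    richer signals; this only shows the shape of the approach: patterns of
    behaviour rather than individual keywords.
    """
    score = 0.0
    score += 0.04 * max(f.age_gap_years - 5.0, 0.0)   # large age gaps raise risk
    score += 0.10 * f.contact_requests_to_minors      # mass-contacting children raises risk
    score += 0.30 * max(f.reply_ratio - 0.5, 0.0)     # heavily one-sided conversations
    score += 0.20 * f.off_platform_requests           # pushing the chat off the platform
    score -= 0.05 * min(f.mutual_friends, 5)          # shared friends lower the score slightly
    return max(0.0, min(1.0, score))


def should_escalate(f: InteractionFeatures, threshold: float = 0.6) -> bool:
    """Interactions scoring above the threshold would be queued for human
    review and, where grooming is confirmed, reported to NCMEC or, where
    necessary, directly to law enforcement."""
    return grooming_risk_score(f) >= threshold


if __name__ == "__main__":
    example = InteractionFeatures(
        age_gap_years=25,
        contact_requests_to_minors=8,
        reply_ratio=0.9,
        off_platform_requests=2,
        mutual_friends=0,
    )
    print(grooming_risk_score(example), should_escalate(example))
```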

55. Mr Milward said that Microsoft uses “real time moderation technologies on XBox Live to detect grooming”.[31] Conversations over Xbox Live are public communications and are not encrypted. Microsoft will:

“dip in and out of a whole variety of these conversations to check on language being used … and then, equally, we will look for indications that there might be grooming taking place”.[32]

56. A “level of automation” was applied to detect indications that grooming may be occurring.[33] This might be combinations of words indicating that somebody is trying to take a public conversation into a private forum: for example, ‘are your parents around?’ or ‘do you have a number I can call you on?’ Where potential grooming is detected, the “intention” is that the live chat stops, a warning message appears, the account of the potential groomer is suspended, and human moderators investigate.[34] Mr Milward acknowledged, however, that Microsoft:

“already know about instances where there has been grooming taking place on Xbox Live and has transferred to other platforms. So it’s not perfect. Without a doubt, there’s work to do on this”.[35]
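
Mr Milward did not set out the detection logic in detail. The sketch below illustrates the kind of rule-based screening and escalation he described: combinations of words suggesting a move from public chat to a private channel trigger the steps he listed. Apart from the two example phrases quoted above, every pattern, name and action in the sketch is an assumption made for illustration, not a description of Microsoft’s implementation.

```python
import re
from dataclasses import dataclass, field
from typing import List


# Phrase patterns suggesting an attempt to move a public chat somewhere private.
# The first two reflect the examples quoted in the evidence; the additional
# patterns, action names and escalation steps are illustrative assumptions.
PRIVATE_CONTACT_PATTERNS = [
    re.compile(r"\bare your parents (around|home)\b", re.IGNORECASE),
    re.compile(r"\bdo you have a (number|phone) i can (call|text) you on\b", re.IGNORECASE),
    re.compile(r"\badd me on (snapchat|whatsapp|kik|discord)\b", re.IGNORECASE),
    re.compile(r"\blet'?s talk somewhere private\b", re.IGNORECASE),
]


@dataclass
class ModerationOutcome:
    flagged: bool
    matched_patterns: List[str] = field(default_factory=list)
    actions: List[str] = field(default_factory=list)


def screen_message(message: str) -> ModerationOutcome:
    """Screen one public chat message and, if it matches, return the
    escalation steps described in the evidence: stop the live chat, show a
    warning, suspend the sender and hand the conversation to human moderators."""
    matched = [p.pattern for p in PRIVATE_CONTACT_PATTERNS if p.search(message)]
    if not matched:
        return ModerationOutcome(flagged=False)
    return ModerationOutcome(
        flagged=True,
        matched_patterns=matched,
        actions=[
            "stop live chat",
            "show warning message",
            "suspend sender's account",
            "queue conversation for human moderator review",
        ],
    )


if __name__ == "__main__":
    print(screen_message("good game, rematch?"))
    print(screen_message("do you have a number I can call you on?"))
```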

57. Google’s “comments classifier” uses machine learning to detect potential grooming in comments on YouTube videos and brings them to the attention of a human moderator for review.[36] The classifier is an automated system that looks for “potentially inappropriate comments”, captures them, removes them and, if necessary, reports them to NCMEC.
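
The classifier itself was described only in general terms. As an illustrative sketch of the flagging-then-human-review flow described in the evidence, the fragment below routes comments that a scoring model flags to a human reviewer, who decides whether to remove them and whether a report to NCMEC is necessary; all names, thresholds and stub functions are assumptions, not Google’s implementation.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Comment:
    comment_id: str
    video_id: str
    text: str


@dataclass
class ReviewDecision:
    remove: bool
    report_to_ncmec: bool


def triage_comments(
    comments: List[Comment],
    score_fn: Callable[[str], float],                 # stand-in for the trained classifier
    review_fn: Callable[[Comment], ReviewDecision],   # stand-in for the human moderator
    threshold: float = 0.8,
) -> None:
    """Send comments the classifier flags to a human reviewer, then act on
    the reviewer's decision: remove the comment and, if necessary, report it
    to NCMEC."""
    for comment in comments:
        if score_fn(comment.text) < threshold:
            continue                                  # not flagged; leave the comment up
        decision = review_fn(comment)                 # human review of the flagged comment
        if decision.remove:
            print(f"removing comment {comment.comment_id} from video {comment.video_id}")
        if decision.report_to_ncmec:
            print(f"reporting comment {comment.comment_id} to NCMEC")


if __name__ == "__main__":
    # Stubs only: a real deployment would plug in the trained classifier and
    # the moderation tooling here.
    demo = [
        Comment("c1", "v1", "great video!"),
        Comment("c2", "v1", "how old are you? message me privately"),
    ]
    triage_comments(
        demo,
        score_fn=lambda text: 0.9 if "how old are you" in text else 0.1,
        review_fn=lambda comment: ReviewDecision(remove=True, report_to_ncmec=False),
    )
```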

58. In November 2018, the Home Secretary convened a ‘hackathon’, hosted by Microsoft, at which engineers from leading technology companies including Microsoft, Facebook and Google worked for two days to analyse tens of thousands of conversations and understand the patterns used by online groomers. This enabled the engineers to develop a prototype that could potentially be used to flag conversations that might be indicative of grooming.

59. Mr Milward described the hackathon as a “significant brainstorming resulting in an engineering solution”.[37] The prototype was improved following a second, mini-hackathon in May 2019, and it was put into live testing with three companies. At the May 2019 public hearing, Mr Milward said the testing was reporting “very strong accuracy”[38] and that it was a matter of “months” rather than years for the prototype to be finalised and deployed.[39] In January 2020, Microsoft announced the launch of this technology. Known as Project Artemis, the technology will be licensed free of charge to smaller and medium-sized technology companies worldwide.[40]

60. While acknowledging the useful work on the prototype, Mr Papaleontiou of the Home Office emphasised the need for follow-up. He said the Home Office:

“will be continuing to engage closely with industry and partners in terms of making sure that good intentions and a good prototype actually manifests itself in a product that delivers real world tangible benefits to those we are focused on protecting”.[41]

61. Mr Jones of the NCA emphasised the need for industry to implement the measures that it had developed:

“the real challenge for this type of event – you know, what’s not to like about very clever people in Silicon Valley coming together and writing code to detect child abuse? Brilliant. What we need is the delivery and prevention of that offending and we are not seeing that at the pace that we should”.[42]

He described frustration that very positive measures taken by some smaller companies to tackle online grooming had not been adopted by bigger organisations. For example, Mr Jones told us about a company called Jagex, which developed “sophisticated”[43] technology that can identify potentially inappropriate communication between users within its online gaming community. Where there is such communication, players receive a live pop-up advising them that the conversation is inappropriate. He said that “over 87 percent”[44] of the players who received the pop-up modified their behaviour. Mr Jones said he struggled to understand why bigger companies had not followed suit, despite efforts by the NCA to highlight and promote the innovation.[45]

62. Ms Canegallo explained that online grooming is not always easy to detect. She said that “grooming could begin with interaction that seems innocuous or comments that, without clear understanding of the intent … could not raise suspicion”.[46] It can happen both online and offline across many platforms “in a way where it may not be clear the individual’s age”.[47] When asked about media reports of grooming on YouTube, Google pointed (among other things) to having “dramatically improved” its comments classifier.[48]

63. In light of the NSPCC research revealing approximately two incidents of grooming a day on Facebook in the UK,[49] Ms de Bailliencourt was asked about the adequacy of Facebook’s response to online grooming. She said that Facebook “have invested and are and will continue to invest a huge amount” in its response to online grooming and took its responsibility seriously.[50]

64. The results of the 2018 hackathon show how much can be achieved, in a short space of time, when government takes the lead and internet companies collaborate with one another.

65. Given the evidence of the scale of online grooming, industry’s response will necessarily involve the increased development and use of technology. Companies will also need to ensure that there are sufficient numbers of human moderators to follow up on potential instances of online grooming identified by those technologies.
