
The Internet Investigation Report

D.4: Preventing grooming

Industry

20. The Inquiry heard of various ways in which industry sought to prevent online grooming occurring.

21. Ms Kristie Canegallo, Vice President and Global Lead for Trust and Safety at Google, explained that YouTube now requires a user to accept an invitation before engaging in a private conversation with another user.[1] This gives users control over who they chat to and allows them to block approaches from anyone they do not wish to be in contact with.

22. Mr Milward explained the parental controls available on Xbox. The set-up procedure specifically asks if the Xbox is going to be used by a child. If so, a main administrator can be designated, giving them a level of control over the child’s account. Microsoft ensures that the administrator is an adult “by demanding various age verification which is required by law, and we ensure that it is in fact a parent by taking a small credit card payment”.[2] Where a child account is set up, various settings such as the live chat function are switched off by default and permission for access to such functions can only be granted by the adult administrator.[3]
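
By way of illustration only, the “off by default” model Mr Milward described can be sketched as follows. The names and structure here are hypothetical assumptions for the purposes of this sketch and do not represent Microsoft’s actual implementation; the point is simply that riskier features start disabled on a child account and only a verified adult administrator can enable them.

```python
# Hypothetical sketch of the "off by default" child account model
# described above. Names and structure are illustrative assumptions,
# not Microsoft's actual implementation.
from dataclasses import dataclass

@dataclass
class ChildAccount:
    username: str
    # Riskier features start switched off by default.
    live_chat_enabled: bool = False

@dataclass
class AdultAdministrator:
    username: str
    age_verified: bool  # e.g. established by the checks described above

    def grant_live_chat(self, child: ChildAccount) -> None:
        # Only a verified adult administrator may enable live chat.
        if not self.age_verified:
            raise PermissionError("administrator must be age-verified")
        child.live_chat_enabled = True

# Live chat stays off unless the verified adult switches it on.
parent = AdultAdministrator(username="parent01", age_verified=True)
child = ChildAccount(username="child01")
parent.grant_live_chat(child)
print(child.live_chat_enabled)  # True
```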

Age verification

23. The Inquiry heard evidence that many social media and technology companies stipulate that, in respect of some of their platforms or services, users must be at least 13 years old. Facebook’s terms and conditions state that children under 13 cannot use Facebook.[4] The same applies to Kik.[5] In order to have a YouTube account, the user needs to be at least 13 years old.[6] Skype has no age limit but its “websites and software are not intended for or designed to attract users under the age of 13”.[7]

24. Mr John Carr OBE, who advises on matters of child internet safety, was asked how the age of 13 came to be the minimum age for subscription to online platforms and services. He explained that this requirement originated from evidence gathered in the US in the late 1990s in relation to marketing and advertising. The evidence suggested that 13 was the age at which a child could “decide for themselves whether or not to be part of an environment where those kinds of advertisements, commercial advertisements, would be present”.[8] Although this research was conducted before social media companies existed, the age limit has not changed.

25. In reality, the steps taken to ensure that users are at least 13 years old amount to no more than requiring the child to enter a date of birth which makes them at least 13. IN-A3 said that she opened a Facebook account when she was 12 because all her friends at school were on Facebook and that she could not now remember being told about the age limit.

I can’t remember if I lied about my age, but if I did lie about my age, think how simple that is, just to be able to put a different age, different year you was born and just being able to set up your account straight away.[9]
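
For illustration, a self-declaration check of the kind described in paragraph 25 can be sketched in a few lines of hypothetical code. It shows why entering a false year of birth defeats the check entirely, exactly as IN-A3 described.

```python
# Hypothetical sketch of a self-declared date-of-birth check of the
# kind described above; nothing verifies that the date entered is true.
from datetime import date
from typing import Optional

MINIMUM_AGE = 13

def old_enough(date_of_birth: date, today: Optional[date] = None) -> bool:
    today = today or date.today()
    # Compute age in whole years from the self-declared date of birth.
    had_birthday = (today.month, today.day) >= (date_of_birth.month, date_of_birth.day)
    age = today.year - date_of_birth.year - (0 if had_birthday else 1)
    return age >= MINIMUM_AGE

# A 12-year-old who simply enters an earlier year of birth passes the
# check at once.
claimed = date(2006, 1, 1)  # false year of birth entered at sign-up
print(old_enough(claimed, today=date(2020, 1, 1)))  # True
```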

26. The NSPCC research for 2017/18 revealed that children aged 11 and under were the victims in one-quarter of offences.[10] Mr Stower described it as:

astonishing … And I find the fact that children under 11 are being targeted … quite systematically by offenders here is something I don’t think the internet companies have yet got to grips with.[11]

27. The internet companies that gave evidence explained the ways in which they worked to detect underage users.

27.1. Ms de Bailliencourt said that, in her view, there was “no easy solution to implement age verification”.[12] For example, she said, a requirement to present government ID cards or credit cards could exclude those who did not have them and would involve the processing of a substantial amount of information. She explained that Facebook’s reporting tool includes the ability to report a possible underage user but said that Facebook did not keep data on the number of underage reports made in respect of the UK because:

under COPPA,[13] Facebook is required to permanently wipe out any data potentially related to the account of a child under the age of 13 quite swiftly. So when we remove an account from the platform, we remove any associated data with this.[14]

Facebook had “started to look into artificial intelligence to help detect underage users”.[15]

27.2. When asked whether Facebook was able to assure the public that children would not be able to open accounts if they were underage, Ms de Bailliencourt said “this is something that we all need to work on together”.[16] Similarly, when asked whether Facebook could guarantee that children would be safe from being groomed online, Ms de Bailliencourt said that this would be a “very difficult promise to make” but that Facebook would “put the manpower and the technology that we have at our fingertips to make this as difficult as possible”.[17]

27.3. In relation to YouTube, Ms Canegallo said that if there are reasons to suspect a user is under 13 years old, for example where the user reveals their age,[18] YouTube requires the user to submit additional verification or it will terminate the account. YouTube “terminate[s] thousands of accounts on a weekly basis for not passing that age verification process”.[19] When asked whether this signified that the process was inadequate in the first place, Ms Canegallo said that YouTube was “constantly looking to improve” its age verification process while “looking to ensure that we are weighing those considerations of safety on the platform as well as privacy and data minimisation appropriately”.[20]

28. The NCA was clear, however, that not enough was being done by social media platforms to ensure that users were at least 13 years old. Mr Robert Jones, Director of Threat Leadership for the NCA, said it was “absolutely pointless simply to rely on users declaring they were 13 years old if this was not then checked”[21] because experience showed that this was “no defence in terms of preventing underage use”. He said there was a “viable set of measures which could be applied across the social media platforms as well”.[22] Mr Jones also said that the measures used to verify a child’s age for the purposes of the Report Remove initiative,[23] which may require the involvement of a parent or carer, were another model that could be considered.[24]

29. Mr Christian Papaleontiou, Head of the Home Office’s Tackling Exploitation and Abuse Unit, told us about a practical initiative taken by the social network Yubo. Yubo partnered with Yoti (a digital identity provider) to use machine learning to detect whether website users are in the right age band for their platform.[25] He also described a recent 10-week study[26] by the Home Office and GCHQ to understand what more can be done to identify underage users. The study – which involved representatives from government, charities, academia, industry and law enforcement – found that at present no single technical approach “could accurately identify child users while protecting privacy and ensuring a ‘frictionless customer experience’”.[27] However, “early product tests” conducted as part of the study revealed that a number of potential solutions “show promise”.[28]
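
Purely as a sketch, the kind of approach Mr Papaleontiou described, in which a machine-learning estimate of a user’s age is used as a gate at sign-up, might look like the following. The estimator, band, threshold values and names here are illustrative assumptions and do not describe Yoti’s actual product.

```python
# Hypothetical sketch of gating sign-up on a machine-learning age
# estimate, in the spirit of the Yubo/Yoti approach described above.
# The estimator, threshold values and names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    years: float       # the model's point estimate of the user's age
    confidence: float  # the model's confidence in that estimate (0 to 1)

def gate_signup(estimate: AgeEstimate,
                minimum_age: int = 13,
                confidence_floor: float = 0.9) -> str:
    """Decide how to handle a sign-up given an age estimate."""
    if estimate.confidence < confidence_floor:
        # Low confidence: escalate to a stronger check rather than
        # silently accepting or rejecting the user.
        return "escalate_to_manual_verification"
    if estimate.years < minimum_age:
        return "block_signup"
    return "allow_signup"

print(gate_signup(AgeEstimate(years=11.2, confidence=0.95)))  # block_signup
print(gate_signup(AgeEstimate(years=16.0, confidence=0.55)))  # escalate_to_manual_verification
```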

30. In closing submissions, a number of core participants called for industry to adopt age verification as well as identity verification. It was said – on behalf of IN-A1, IN-A2 and IN-A3 – that age verification on social media platforms was required now to protect children from grooming as it was “not good enough to rely on self-certification”.[29]

31. The NCA agreed that both age and identity verification were “vital in mitigating the online child abuse threat”, particularly for encrypted services and platforms as “it is one of the few things that can be done to mitigate” the difficulties that they posed to law enforcement.[30] As the NCA questioned:

Why, if you operate a service designed for children above a certain age, should you have any difficulty whatsoever in requiring children to establish their age when opening an account? … What is the legitimate and compelling reason for not doing so, that is sufficiently powerful to outweigh the child protection benefit?[31]

32. Based on the evidence we heard, the risk of being groomed online is particularly acute for children aged under 13 years old. It is plain that a more robust mechanism is required to verify the age of users than simply requiring them to declare their age on sign-up to a platform or service. The internet companies must also do more to identify users who are under 13 years old. As the Home Office and GCHQ study[32] reveals, there is much work still to be done before a practical technical solution to the problem can be achieved.

Education

33. Children who participated in the Inquiry’s ‘Learning about online sexual harm’ research[33] told the researchers that education focussed too much on “stereotypical ‘stranger danger’ images of perpetrators and abuse”.[34] In fact, where the secondary school aged children commented on the nature of online sexual harm, they did so “almost exclusively with reference to online approaches from unknown adults”.[35] In one of the focus groups conducted by the researchers, “every participant said they had met up with at least one person who they had initially met online, without an adult present, and showed little concern about having done so”.[36]

34. The research found that children wanted to learn more about the potential to be sexually abused online from people they knew, including their friends and peers. One 15-year-old female interviewee said:

Obviously they can tell you, ‘Don’t talk to strangers, don’t let strangers talk to you’, and stuff, but they should also talk about people that you know and trust, or you think you trust, because they might be more of, you might be more of a target to them because they think you trust them.[37]

35. The Department for Education’s draft statutory guidance Relationships Education, Relationships and Sex Education (RSE) and Health Education (February 2019)[38] states that, by the end of secondary school, pupils should know, amongst other topics, “the concepts of and laws relating to … grooming”.[39] This guidance will be compulsory in England from September 2020, with schools being encouraged to teach it from September 2019.

36. The guidance states that, before leaving primary school, children should know “that people sometimes behave differently online, including by pretending to be someone they are not”,[40] but there is no specific reference to primary school aged children being taught about grooming. One 14-year-old who was interviewed as part of the ‘Learning about online sexual harm’ research recounted that by the time she was in year 6 (10 to 11 years old) she was “already getting messages from random people and I didn’t know what to do”.[41]

37. The Department for Education will need to ensure that the guidance for primary school aged children sufficiently protects them from the dangers of being groomed online.
