The Report of the Independent Inquiry into Child Sexual Abuse

Final report

J.5: The Online Safety Bill

28. In March 2022, the Online Safety Bill was laid before Parliament.[1] Once enacted, the Bill will mean that, for the first time, companies which host user-generated content and search engines will be regulated and will owe duties of care to their users. Those duties include “to mitigate and effectively manage” the risk of harm caused by illegal content and to protect children from harm on those parts of the service which children can access.[2] The definition of illegal content covers child sexual exploitation and abuse content.

29. In summary, the duties of care imposed by the legislation require:

  • all providers to conduct risk assessments relating to illegal content;
  • where the service can be accessed by children, providers to conduct “children’s risk assessments” (which must be kept up to date and be updated before the service makes a significant change) and to protect children’s safety online;
  • providers to use “proportionate systems and processes” designed to minimise the presence of illegal content on the service in the first place; where illegal content is uploaded, to minimise the time for which it is present and the extent of its dissemination; and to remove illegal content that they are made aware of, or become aware of, as soon as possible.[3]

30. The proposed legislation does not prescribe the use of specific technologies to prevent harm taking place but instead makes clear that companies should be using technology to address these harms. The Office of Communications (Ofcom), as the regulator, will be able to require a company to use such technology where other measures do not work. The Home Office’s Interim Code sets out the voluntary action the government expects providers to take before the regulator is established. The Interim Code states that companies “should take reasonable steps” to seek to prevent known child sexual abuse material being made available or accessible on their platforms.[4] It also states that companies should “proactively identify” newly generated child sexual abuse material and identify and combat child sexual exploitation and abuse activity such as grooming.[5] The drafting of any future code should make clear, for the avoidance of doubt, that aspects of the companies’ response to online child sexual exploitation and abuse are mandatory.

31. Ofcom will have a range of enforcement powers to tackle non-compliance, including the power to impose a fine of up to £18 million or 10 percent of global annual turnover (whichever is the higher). In addition, it will have powers to require additional information from companies under investigation, to enter companies’ premises and access documentation, data and equipment, and to interview companies’ employees.

32. In addition to the Bill, there are plans for action by the government and Ofcom on education. The government’s 2021 Child Sexual Abuse Strategy states that the Home Office “will work to deter individuals from abusive behaviour, investing in evidence-based public education campaigns that can prevent offending”.[6] Ofcom is expected to promote education about online safety and the use of safety technologies to empower users and tackle online harms. The detailed plans for this work, including how the efficacy of Ofcom’s activity will be evaluated, are not yet known.

33. The Department for Digital, Culture, Media & Sport (DCMS) also published an official ‘one-stop shop’ containing guidance for businesses on keeping children safe online. This came after research showed that smaller companies were less confident than bigger businesses in their ability to find information on online child safety. DCMS noted that the new guidance was separate from the forthcoming regulations in the Online Safety Bill.[7] The guidance provided advice on data protection and privacy, delivering age-appropriate content, addressing harmful conduct and protecting children from online sexual exploitation and abuse. It also recommended applying and enforcing minimum age limits.

34. The UK is not the only country moving towards regulation of online service providers.

34.1. In Australia, the Online Safety Act 2021 (which came into force in January 2022) requires service providers to take reasonable steps to minimise access to child sexual exploitation material. The Act also requires the development of new mandatory industry codes to regulate illegal and restricted content. The codes require online platforms and service providers to detect and remove illegal content such as child sexual abuse material.

34.2. Ireland is proposing to introduce a regulatory regime for online safety in its Online Safety and Media Regulation Bill.

34.3. The European Union’s (EU) Digital Services legislation, when brought into force, will improve mechanisms for the removal of illegal content online in EU member states.[8] In May 2022, the EU also announced that it was proposing legislation to make it mandatory for companies to “detect, report and remove child sexual abuse material on their services”.[9]

35. Regulation in other areas and industries, for example by the Health and Safety Executive, has led to safer working conditions and improved working practices across a vast range of businesses. Taken as a whole, the government’s proposals to introduce a regulatory regime in respect of online child sexual abuse and exploitation are welcome. However, concerns remain about the ability of the proposed legislation to address issues surrounding age and identity verification.

Age verification

36. Protecting children from online child sexual abuse is not the responsibility of a single institution. A multifaceted, collaborative approach is required which prevents harmful content being available, removes it when it becomes known, denies underage children access to services and platforms, and makes sites and platforms safe from the outset.

37. The Interim Code states that companies should use “safety by design approaches” and “adopt enhanced safety measures with the aim of protecting children”. The Code envisages that reasonable steps include companies having “safety processes and default settings that are appropriate to the actual age of their users and that make provision for the possibility of underage users”.[10]

38. The vast majority of participants in the Inquiry’s Engagement with Children and Young People project stated that they had accessed social media apps before they reached the minimum age requirement. They noted that there were few restrictions in place to prevent this and expressed surprise that technology was not being used to strengthen identity verification controls.[11]

39. The Inquiry’s Internet investigation highlighted the harm being done to young children by online-facilitated sexual abuse. For children aged under 13, the risk of being groomed online was “particularly acute”.[12] Those aged 11 to 13 also featured prominently in the images and videos of live-streamed child sexual abuse analysed by the Internet Watch Foundation (IWF).[13]

40. As noted in Part F, technological developments that help detect online-facilitated child sexual abuse are welcome. However, they do not address the Inquiry’s fundamental concern that children under the age of 13 are able to access social media platforms and services by too easily evading overly simplistic age verification requirements (processes intended to ensure that users prove their age before accessing certain platforms). Many services require nothing more than entering a date of birth.

Age verification in the context of access to adult pornographic material

41. Repeated exposure to self-generated sexual images of children, and to pornography, has in some instances led to desensitisation – where such incidents become accepted as part of everyday life and so are less likely to be reported.[14] Children who participated in the Inquiry’s research for Learning about Online Sexual Harm identified exposure to pornography as one of a number of examples of online sexual harm.[15] Evidence heard in the Residential Schools investigation suggested that it would not be unusual for a young person with autism who had accessed a highly inappropriate pornographic website to fail to understand why the site should be censored.[16]

42. Chief Constable Simon Bailey, at that time the National Police Chiefs’ Council Lead for Child Protection and Abuse Investigations and now retired, considered that the availability of pornography was:

“creating a group of men who will look at pornography and the pornography gets harder and harder and harder, to the point where they are simply getting no sexual stimulation from it at all, so the next click is child abuse imagery. This is a real problem. It really worries me that children who should not be being able to access that material … are being led to believe this is what a normal relationship looks like and this is normal activity”.[17]

43. These views echo the findings of the government’s Equalities Office, which concluded that there was “substantial evidence of an association” between the use of pornography and harmful attitudes and behaviours towards women and girls.[18] Young victims and survivors, and support service organisations, told the Inquiry that online pornography had particularly normalised and promoted violent sex and rape fantasies.[19]

44. In 2016, the government proposed legislation (the Digital Economy Act) that would have restricted access to pornographic websites to those aged 18 or over. However, in 2019, the government decided not to implement that part of the Digital Economy Act, stating that:

“It is important that our policy aims and our overall policy on protecting children from online harms are developed coherently … this objective of coherence will be best achieved through our wider online harms proposals”.[20]

45. In February 2022, the government announced a “new standalone provision” to the Online Safety Bill requiring “providers who publish or place pornographic content on their services to prevent children from accessing that content”.[21] This provision is intended to capture commercial providers of pornography in addition to sites that allow user-generated content, which were already within the scope of the Bill.[22] The announcement specifically stated that preventing access “could include adults using secure age verification technology to verify that they possess a credit card and are over 18 or having a third-party service confirm their age against government data”. It added that age verification technologies:

  • do not require “a full identity check”;
  • must be “secure, effective and privacy-preserving”; and
  • are increasingly common practice in other online sectors, including online gambling and age-restricted sales.[23]

46. While the Bill refers to age verification as an example of how a provider might prevent children encountering pornographic content, it does not mandate age verification.

47. Instead, the Bill imposes duties of care and leaves it to each company to decide how to comply with them. Age verification is only one measure a company may use to demonstrate to Ofcom that it can fulfil its duty of care and prevent children from accessing pornography. The government states that the Bill “does not mandate the use of specific solutions as it is vital that it is flexible to allow for innovation and the development and use of more effective technology in the future”.[24]

48. However welcome it is to prevent children from accessing pornography, the draft Bill does not expressly go far enough to protect children from the potential harm caused by accessing social media platforms and services on which they can be groomed and sexually abused. The current age verification requirements provide insufficient protection to young children, particularly those aged under 13. Where access to a service or platform is age restricted, those restrictions must be robustly enforced. The Inquiry therefore reiterates its recommendation that stronger age verification techniques be required.

Recommendation 20: Age verification

The Inquiry recommends (as originally stated in The Internet Investigation Report, dated March 2020) that the UK government introduces legislation requiring providers of online services and social media platforms to implement more stringent age verification measures.

Identity verification

49. In addition to concerns about age verification, there are also concerns about the ease with which offenders adopt a false identity in order to carry out their abuse.

49.1. The Internet Investigation heard how 13-year-old IN-A1 and her 12-year-old brother, IN-A2, were groomed online by a 57-year-old man who initially pretended to be a 22-year-old woman named ‘Susan’. His grooming of IN-A1 was such that when ‘Susan’ revealed he was a man, IN-A1 was not able to break contact with him. He made IN-A2 sexually touch IN-A1 and suggested that IN-A2 should have sexual intercourse with her. He made IN-A1 commit sexual acts for him over webcam.[25]

49.2. In 2021, David Wilson was sentenced to 28 years’ imprisonment for 96 child sexual abuse offences against 52 boys aged 4 to 14 years old. The offending occurred between May 2016 and April 2020, during which time Wilson approached more than 5,000 boys worldwide by adopting personas of teenage girls – principally on Facebook – and blackmailed some victims into abusing younger siblings or friends and sending him the footage. Using unregistered phones, Wilson scoured social media sites for vulnerable victims. Over the course of the investigation, Facebook made numerous referrals to the National Center for Missing & Exploited Children (NCMEC), including identifying 20 accounts of boys aged 12 to 15 years who had sent indecent images of themselves to an account seemingly belonging to a 13-year-old girl (in fact, Wilson).[26]

50. Given that many platforms do not require a user’s identity to be verified, it is all too easy for perpetrators to set up a fake online profile (a practice known as catfishing) which enables them to masquerade as a child or as someone else.

51. Data protection considerations about handling large volumes of personal data will need to be taken into account. Nevertheless, many of the young victims and survivors told the Engagement team that they were “surprised that technology is not being used to strengthen identity verification controls”. They referred, understandably, to the ease with which children can access online sites compared with how difficult it can be for an adult to get into their own banking app.[27]

52. The Online Safety Bill includes measures which would provide adults (but seemingly not children) with the option not to interact with unverified users. Ofcom may therefore wish to give more consideration to identity verification when it publishes its code of practice for child sexual exploitation and abuse.

53. It is unrealistic, however, to assume that age and identity verification will, by themselves, prevent underage access to internet services and platforms and protect children from online harm. There needs to be an increased emphasis on making children’s use of the internet safer by design, and all platforms and services, once established, need to have the capacity and capability to respond to emerging patterns of child sexual abuse. The new Child Protection Authorities will play an important role in helping to provide advice on these and other developing challenges in the future.
