
UK Online Safety Act and End-to-End Encryption

  • Writer: Joseph Watt
  • Dec 15, 2024
  • 7 min read

Updated: Apr 7, 2025

Phone home screen with WhatsApp, Facebook, Instagram and Snapchat icons.
Credit: Author

On 26 October 2023, the Online Safety Act, first published as a draft bill in May 2021 with the promise to ‘make the UK the safest place in the world to be online’, became law upon receiving Royal Assent.[1] This marked the end of a notably tumultuous public and parliamentary history, and the beginning of another precarious phase: implementation.


Much public debate has surrounded the Online Safety Act, and discussions have become increasingly and unhelpfully polarised. Those defending the Act are cast as totalitarians ushering in a new age of mass surveillance. Any law mandating the detection, blocking and reporting of child sexual abuse material is condemned as a ‘trojan horse’: legislation disguised as child protection that actually smuggles in government-enforced censorship.


Meanwhile, those anxious over the bill’s passage, uneasy about laws that could encroach on privacy and fearful that mission creep might lead to the censorship of marginalised groups, are labelled pro-child-abuse internet trolls. Their concerns are too often dismissed as unempathetic, sensationalist fearmongering.


These responses serve only to alienate concerned parties whilst oversimplifying online safety and global child protection. When these narratives are fed, legislation is incorrectly presented as a binary choice between privacy and protection. Both principles are of vital importance. We absolutely can and should safeguard tools for technological privacy whilst preventing sex offenders from operating online.


We must clearly understand the necessity of acting now on a global scale, to disrupt online sexually abusive behaviours targeting children whilst appropriately addressing the legitimate fears of those opposing legislation.


In 2023, the National Crime Agency estimated that between 680,000 and 830,000 adults in the UK pose some degree of risk to children online.[2]


In the Philippines, online sexual exploitation of children is rampant, fuelled, in substantial part, by money from offenders in the UK. The Scale of Harm study, released in 2023 by International Justice Mission and the University of Nottingham Rights Lab, found that nearly half a million, or 1 in every 100, Filipino children were sexually abused to create child sexual exploitation material sold to offenders across the globe in 2022 alone.[3]


In this crime, children are sexually exploited by traffickers, most often a parent or close relative, who then spread or sell images and videos of the abuse online – often live-streaming it for sex offenders to direct from anywhere in the world.


Unsurprisingly, this form of online sexual exploitation thrived during the COVID-19 pandemic: both demand for, and instances of, child sexual abuse material grew exponentially. The Philippines is now recognised as a global hotspot, whilst the UK consistently ranks highly in terms of demand. The Anti-Money Laundering Council released a study in April 2023 ranking the UK second only to the USA by the number and value of suspicious financial transactions made to the Philippines directly associated with online sexual exploitation of children.[4]


UK legislative decisions have very real implications for the welfare of children across the globe: our demand feeds global supply, which means thousands more children in the Philippines exposed to extensive, long-lasting harm.


These alarming statistics compel strong responses from the UK government and electronic service providers: scaled-up detection, blocking and reporting of child sexual abuse material. According to WeProtect Global Alliance’s 2023 Global Threat Assessment, the selling and spread of such material often occurs on the surface of the internet, on the social media platforms and messaging services we use every day.[5]


Currently, ahead of the Online Safety Act’s incremental rollout, electronic service providers have no legal obligation to actively search for and report the child sexual abuse material that we know plagues their platforms. Despite this, providers do take their responsibilities seriously: in 2022, over 90% of all reports of suspected online sexual exploitation of children sent to the National Center for Missing and Exploited Children (NCMEC) in the USA came from Facebook, Instagram, Google, WhatsApp and Omegle.[6]


Still, detecting, blocking and reporting must become a universal mandate; otherwise we risk creating safe online spaces for child sexual abuse to flourish, particularly with the expanding rollout of end-to-end encryption which, if implemented without safeguards, poses a significant threat to children.


Most public debate centres on this point. Chapter 5 of the Online Safety Act outlines that Ofcom, the independent regulator charged with enforcing the Act, can require electronic services, including those employing end-to-end encryption, to use "accredited technology" to identify, block and report child sexual abuse material.[7]


Online privacy is incredibly important for everyone, particularly victims of online sexual exploitation. Insecure tech platforms risk breaches of sensitive personal information, allowing children to be identified and located more easily. End-to-end encryption ensures that messages can be read only by the sender and the intended recipient; not even the service provider holds a key to decrypt communications.
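To see why the provider holds no key, here is a minimal sketch of the key-exchange idea underpinning end-to-end encryption, using a toy Diffie–Hellman exchange. Real messengers use X25519 and protocols such as Signal’s; the tiny prime below is purely illustrative and insecure.

```python
import hashlib
import secrets

# Toy Diffie-Hellman sketch of the end-to-end principle. Real messengers use
# X25519 and the Signal protocol; this Mersenne prime is far too small to be secure.
p = (1 << 127) - 1
g = 3

a = secrets.randbelow(p - 2) + 2   # Alice's private key, never leaves her device
b = secrets.randbelow(p - 2) + 2   # Bob's private key, never leaves his device
A, B = pow(g, a, p), pow(g, b, p)  # public values the server may freely relay

# Both devices derive the same shared key; the provider, seeing only A and B,
# cannot recover it without solving the discrete logarithm problem.
key_alice = hashlib.sha256(str(pow(B, a, p)).encode()).hexdigest()
key_bob = hashlib.sha256(str(pow(A, b, p)).encode()).hexdigest()
assert key_alice == key_bob
```

Because only `a` and `b` ever leave the devices in blinded form, a server that relays `A` and `B` learns nothing it can decrypt with.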


No thoughtful, child-protective legislation should seek to remove end-to-end encryption, and previous calls to include ‘backdoors’ in encryption, granting exceptional access to governments or law enforcement, deserve criticism: such backdoors could themselves be discovered and exploited.


However, without effective child protection measures in place, end-to-end encryption makes it virtually impossible to scan for and report child sexual abuse material. Responding to Meta’s stated intention to roll out end-to-end encrypted messaging by default, the NCA estimated that, without proper detection and prevention technology in place, between 85% and 92% of all reportable cases could be lost.[8]


We are being presented with a false choice between privacy and protection. It is entirely feasible and realistic to expect the implementation of safety features that do not break encryption.


Though the Online Safety Act does not explicitly refer to any exact method for identifying child sexual abuse content, several viable options already exist, requiring varying degrees of continued research and development. Examples include homomorphic encryption, which enables matching-image detection on encrypted data, and secure enclaves, closed-off environments at the server level where scanning takes place to detect adversarial content.


The solution currently garnering the most attention is client-side scanning. Here, a unique alphanumeric fingerprint, or hash, is computed for each image or video on the user’s device before the content is encrypted and sent. The hash is compared against a database containing hashes of known targeted content; if a match occurs, the message is flagged for manual review by analysts, in the UK likely from the Internet Watch Foundation or a functionally similar organisation.
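The hash-matching step can be illustrated with a short sketch. Real deployments use perceptual hashes such as PhotoDNA, which survive resizing and re-encoding; the cryptographic hash, function names and one-entry database below are stand-ins for illustration only.

```python
import hashlib

# Hypothetical database of fingerprints of known targeted content, of the kind
# an independent body such as the Internet Watch Foundation might maintain.
# The single entry here is the SHA-256 of the word "test", purely for illustration.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute the content's fingerprint on the user's device, before encryption."""
    return hashlib.sha256(image_bytes).hexdigest()

def flag_before_send(image_bytes: bytes) -> bool:
    """True if the content matches the database and should go to manual review;
    otherwise it is encrypted and sent as normal."""
    return fingerprint(image_bytes) in KNOWN_HASHES

assert flag_before_send(b"test") is True          # matches the database entry
assert flag_before_send(b"holiday photo") is False  # unknown content passes through
```

Note that only the fingerprint is compared, never the message text, and the comparison happens before encryption, so the encryption itself is untouched.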


Client-side scanning is already employed as standard by electronic service providers to protect users from viruses and malware, shouldn’t we also be protecting users from child sexual abuse?


Some argue, however, that although client-side scanning operates without breaking encryption, it nonetheless renders end-to-end encryption ‘moot’, and that any form of content moderation unacceptably infringes on civil privacies. The issue raised here is with scanning in general, and the most frequent objection is fear of mission creep. Put simply: though we might now guarantee that communications will only be scanned for known instances of child sexual abuse, can we prevent future scanning from expanding to include other targeted content, such as material that is merely displeasing to the government of the day?


This is a legitimate concern that requires constant, careful monitoring under a clear, rigorous system of checks and balances. It is true that any method of content moderation has the potential to be used as a tool to restrict free speech if left sufficiently unscrutinised. It’s scarily possible to imagine certain authoritarian regimes using forms of scanned content moderation to detect and punish material out of pace with state-controlled messaging. We must be mindful of the potentially grave consequences of mission creep, ensuring that we maintain transparency around monitored adversarial content.


Dr. Ian Levy and Crispin Robinson, technical directors at two key UK national cyber security agencies, argue that these concerns can be effectively mitigated by ensuring the integrity of the databases containing targeted content. These databases should be developed and maintained outside governmental arenas, should publicly publish information describing their current state, and should undergo regular, comprehensive third-party audits.[9] This way we can verify that only agreed-upon adversarial content is scanned for.
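One way to make such a database auditable can be sketched as follows, under the assumption that its entries are published as opaque hashes: a deterministic digest over the whole list lets third-party auditors detect any silent addition or removal between published versions. The function below is hypothetical, not a mechanism from Levy and Robinson’s paper.

```python
import hashlib

def database_digest(hashes: list[str]) -> str:
    """Deterministic digest over the sorted hash list. Any entry silently
    added or removed changes the digest, so regularly published digests let
    independent auditors confirm the database has not drifted in scope."""
    combined = "\n".join(sorted(hashes)).encode()
    return hashlib.sha256(combined).hexdigest()

v1 = database_digest(["aaa", "bbb"])
v2 = database_digest(["aaa", "bbb", "ccc"])      # an entry quietly added
assert v1 != v2                                  # the change is detectable
assert database_digest(["bbb", "aaa"]) == v1     # digest is order-independent
```

A production scheme would more likely use a Merkle tree or transparency log so auditors can also verify individual entries, but the detection principle is the same.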


We can moderate against mission creep. If we allow this fear to prevent action, we prioritise hypothetical future risks whilst dismissing real, widespread harms: children currently being exploited by offenders to produce and distribute images and videos, even livestreamed abuse. By not acting now to strengthen and enforce effective legislation, we continue to provide hospitable cyber spaces for children to be exploited and abused.


Concerns have also legitimately been raised over the potential for false positives: innocent messages being flagged and reported, leading to unfounded, harmful interventions. Yet these concerns can largely be allayed by clear moderation and reporting processes that include multiple independent manual checks prior to any law enforcement referral.
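Back-of-the-envelope arithmetic shows both the scale of the false-positive problem and why layered manual review helps. Every figure below is an assumption chosen for illustration, not a number from the Act or from the cited reports.

```python
# Illustrative base-rate arithmetic; all figures are assumptions.
daily_messages = 10_000_000_000   # messages scanned per day (assumed)
false_positive_rate = 1e-6        # one-in-a-million per-message FP rate (assumed)

expected_false_flags = daily_messages * false_positive_rate
print(f"Expected false flags per day: {expected_false_flags:,.0f}")  # 10,000

# If each independent manual review layer correctly clears, say, 99% of
# false flags, every layer cuts erroneous referrals a hundredfold.
after_one_review = expected_false_flags * 0.01
after_two_reviews = after_one_review * 0.01
print(f"After two independent reviews: {after_two_reviews:,.1f}")    # 1.0
```

Even a very low per-message error rate yields thousands of flags at messaging scale, which is exactly why the multiple independent checks described above matter before any referral reaches law enforcement.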


With these in mind, it seems reasonable that, if future Ofcom accredited technology can be implemented with sufficiently low rates of false positives, and if we can be assured that it really is just child sexual abuse materials and indicators being scanned for, then we should feel confident supporting online safety legislation.


We cannot allow abusive content to be shared freely on the internet without consequence. The scale of acute harm felt by children around the world won’t allow it.

Cassie, a founding member of the Philippine Survivor Network, who was trafficked into online sexual exploitation at the age of 12 and abused over five years, spoke at a UK parliamentary event on March 12, 2024:


‘As we sit here today, there are men in this country who are paying for children to be abused for them to watch live online. They are watching them be stripped of their dignity and abused for their own satisfaction – just like they watched me.’

 

We have a duty of care to defend children like Cassie.

 

‘I’m determined that it will stop and that children like me will be protected. The UK government, Ofcom and UK authorities all have a key role in making this happen.’[10]


[1] Department for Science, Innovation and Technology and Rt. Hon. Michelle Donelan MP, (27 October 2023), GOV.UK; https://www.gov.uk/government/news/overwhelming-support-for-online-safety-act-as-rules-making-uk-the-safest-place-in-the-world-to-be-online-become-law

[2] National Crime Agency (NCA), (2023), National Strategic Assessment 2023 for Serious and Organised Crime

[3] International Justice Mission and University of Nottingham Rights Lab, (2023), Scale of Harm Research Method, Findings and Recommendations: Estimating the Prevalence of Trafficking to Produce Child Sexual Exploitation Material in the Philippines, International Justice Mission, p.11.

[4] Anti-Money Laundering Council (AMLC), (April 2023), Online Sexual Abuse and Exploitation of Children in the Philippines: an Evaluation using STR Data (July 2020 – December 2022), pp.21-22.

[5] WeProtect Global Alliance, (2023), Global Threat Assessment 2023, pp.26-28.

[6] National Center for Missing & Exploited Children (NCMEC), (2023), 2022 CyberTipline Reports by Electronic Service Providers (ESP).  

[7] Online Safety Act 2023, (26 October 2023), ch.5, 121.

[8] NCA, (2023), NCA response to Meta’s rollout of end-to-end encryption; https://www.nationalcrimeagency.gov.uk/news/nca-response-to-meta-s-rollout-of-end-to-end-encryption

[9] Dr. Ian Levy & Crispin Robinson, (21 July 2022), Thoughts on Child Safety on Commodity Platforms, pp.44-46

[10] International Justice Mission, (2024), MPs, campaigners, survivors and UK NCA urge action to tackle growth in livestreamed child abuse; https://www.ijmuk.org/stories/mps-campaigners-survivors-and-uk-nca-urge-action-to-tackle-growth-in-livestreamed-child-abuse

 
 
 
