Ofcom Publish New UK Online Safety Codes of Practice

Ofcom has today published the “first edition” of its new UK online safety codes of practice for “tech firms” (e.g. social media platforms) and smaller websites under the government’s Online Safety Act (OSA). Providers now have three months to ensure they’re able to tackle “illegal harms” (content), such as terrorism, hate, fraud, child sexual abuse and assisting or encouraging suicide.

On the surface, it all sounds sensible and well-intentioned. After all, it’s widely understood, and few could disagree, that the old model of self-regulation has struggled to keep pace with the changing online world, which has allowed far too much “harmful” content to slip through a fairly weak net.

NOTE: The Act and its codes are far-reaching and will touch many websites and online services (big and small alike). But it’s also true to say that Ofcom lacks the resources to monitor everything, thus their focus is likely to be fixed on the worst offenders and major social media firms.

The new Act essentially responds to this by placing new safety duties on social media firms, search engines, messaging, gaming and dating apps, and pornography and file-sharing sites of all sizes. Failing to comply with these rules could be extremely costly: “We have the power to fine companies up to £18m or 10% of their qualifying worldwide revenue – whichever is greater,” said Ofcom.
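To put that penalty formula into concrete terms, the short sketch below simply applies the “greater of £18m or 10% of qualifying worldwide revenue” rule quoted above. The revenue figures are invented examples for illustration only, not real company data.

    # Illustrative only: the maximum fine is the greater of £18m or 10% of
    # qualifying worldwide revenue, per Ofcom's statement above.
    FLAT_CAP_GBP = 18_000_000
    REVENUE_SHARE = 0.10

    def max_fine(qualifying_worldwide_revenue_gbp: float) -> float:
        """Return the theoretical maximum penalty for a given revenue figure."""
        return max(FLAT_CAP_GBP, REVENUE_SHARE * qualifying_worldwide_revenue_gbp)

    # Hypothetical examples: a small provider vs a large platform.
    print(max_fine(5_000_000))       # 18,000,000 - the flat £18m figure applies
    print(max_fine(2_000_000_000))   # 200,000,000 - 10% of revenue is greater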

In “very serious cases” they can also apply for a court order to have broadband ISPs and mobile operators block a website or service in the UK.

Types of harmful content

The Online Safety Act lists over 130 ‘priority offences’, and tech firms must assess and mitigate the risk of these occurring on their platforms. The priority offences can be split into the following categories:

Terrorism
Harassment, stalking, threats and abuse offences
Coercive and controlling behaviour
Hate offences
Intimate image abuse
Extreme pornography
Child sexual exploitation and abuse
Sexual exploitation of adults
Unlawful immigration
Human trafficking
Fraud and financial offences
Proceeds of crime
Assisting or encouraging suicide
Drugs and psychoactive substances
Weapons offences (knives, firearms, and other weapons)
Foreign interference
Animal welfare

However, striking the right balance between Freedom of Expression, individual Privacy and outright Censorship is difficult to get right, particularly when attempting to police the common and highly subjective public expression of negative human thought. Not to mention complex issues of context (e.g. people joking about blowing up a city vs actual terrorists), parody and political speech. Humans often get this wrong, and automated filtering systems are even worse. Only time will tell whether the pros of the new approach are enough to outweigh the potential cons (e.g. overblocking of legal content that is mischaracterised as illegal).

Who the rules apply to

All in-scope services with a significant number of UK users, or targeting the UK market, are covered by the new rules, regardless of where they are based.

The rules apply to services that are made available over the internet (or ‘online services’). This might be a website, app or another type of platform. If you or your business provides an online service, then the rules might apply to you.

Specifically, the rules cover services where:

  • people may encounter content (like images, videos, messages or comments) that has been generated, uploaded or shared by other users. Among other things, this includes private messaging, and services that allow users to upload, generate or share pornographic content. The Act calls these ‘user-to-user services’;
  • people can search other websites or databases (‘search services’); or
  • you or your business publish or display pornographic content.

To give a few examples, a ‘user-to-user’ service could be:

  • a social media site or app;
  • a photo- or video-sharing service;
  • a chat or instant messaging service, like a dating app; or
  • an online or mobile gaming service.

The rules apply to organisations big and small, from large and well-resourced companies to very small ‘micro-businesses’. They also apply to individuals who run an online service.

It doesn’t matter where you or your business is based. The new rules will apply to you (or your business) if the service you provide has a significant number of users in the UK, or if the UK is a target market.
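To illustrate how those scoping conditions combine (a rough sketch only; Ofcom’s own checker tool is the authoritative test), a service is broadly caught when it has UK links and falls into one of the three categories above. The function below is hypothetical and simplified, not Ofcom’s actual criteria.

    # Hypothetical, simplified sketch of the scoping test described above.
    # It is not an official tool - use Ofcom's own checker for real decisions.
    def likely_in_scope(has_significant_uk_users: bool,
                        targets_uk_market: bool,
                        is_user_to_user: bool,
                        is_search_service: bool,
                        publishes_pornography: bool) -> bool:
        uk_link = has_significant_uk_users or targets_uk_market
        regulated_type = is_user_to_user or is_search_service or publishes_pornography
        return uk_link and regulated_type

    # Example: a small forum (user-to-user content) with a mostly UK audience.
    print(likely_in_scope(True, False, True, False, False))  # True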

The first step in implementing all this sees Ofcom giving in-scope providers three months to complete “illegal harms risk assessments”. Every site and app in scope of the new laws thus has from today until 16th March 2025 to complete an assessment to understand the risks illegal content poses to children and adults on their platform.

Subject to their codes completing the Parliamentary process by the above date, from 17th March 2025, sites and apps will then need to start implementing safety measures to mitigate those risks (e.g. effective moderation that can identify and remove “harmful” content), and Ofcom’s codes set out measures they can take. Some of these measures apply to all sites and apps, and others to larger or riskier platforms.

Dame Melanie Dawes, Ofcom’s CEO, said:

“For too long, sites and apps have been unregulated, unaccountable and unwilling to prioritise people’s safety over profits. That changes from today.

The safety spotlight is now firmly on tech firms and it’s time for them to act. We’ll be watching the industry closely to ensure firms match up to the strict safety standards set for them under our first codes and guidance, with further requirements to follow swiftly in the first half of next year.

Those that come up short can expect Ofcom to use the full extent of our enforcement powers against them.”

Peter Kyle MP, UK Technology Secretary, said:

“This government is determined to build a safer online world, where people can access its immense benefits and opportunities without being exposed to a lawless environment of harmful content.

Today we have taken a significant step on this journey. Ofcom’s illegal content codes are a material step change in online safety meaning that from March, platforms will have to proactively take down terrorist material, child and intimate image abuse, and a host of other illegal content, bridging the gap between the laws which protect us in the offline and the online world. If platforms fail to step up the regulator has my backing to use its full powers, including issuing fines and asking the courts to block access to sites.

These laws mark a fundamental re-set in society’s expectations of technology companies. I expect them to deliver and will be watching closely to make sure they do.”

The Act also enables Ofcom, where they “decide it is necessary and proportionate”, to make a provider use (or in some cases develop) a specific technology (this must be accredited by Ofcom or someone they appoint) to tackle child sexual abuse or terrorism content on their sites and apps. The regulator are consulting today on parts of the framework that will underpin this power.

Otherwise, the first set of codes and guidance sets up the enforceable regime, although Ofcom are already working towards an additional consultation on further codes measures in Spring 2025. This will include proposals in the following areas:

  • blocking the accounts of those found to have shared CSAM (Child Sexual Abuse Material);
  • use of AI to tackle illegal harms, including CSAM;
  • use of hash-matching to prevent the sharing of non-consensual intimate imagery and terrorist content (a simplified sketch of the hash-matching idea follows this list); and
  • crisis response protocols for emergency events (such as last summer’s riots).
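On the hash-matching point above, the underlying idea is fairly simple: a fingerprint of each uploaded file is compared against a database of fingerprints of known illegal material. The toy sketch below uses a plain SHA-256 digest to show the principle; real deployments rely on perceptual hashing (which tolerates resizing and re-compression) and curated industry hash lists, neither of which is shown here.

    import hashlib

    # Simplified illustration of hash-matching. Real systems use perceptual
    # hashes and externally maintained hash databases; this toy version only
    # catches byte-for-byte identical files.
    KNOWN_BAD_HASHES = {
        # Placeholder entry (the SHA-256 of an empty file) - in practice these
        # would come from trusted, externally maintained hash lists.
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def should_block(file_bytes: bytes) -> bool:
        digest = hashlib.sha256(file_bytes).hexdigest()
        return digest in KNOWN_BAD_HASHES

    print(should_block(b""))               # True - matches the placeholder hash
    print(should_block(b"holiday photo"))  # False - not in the list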

And today’s codes and guidance are part of a much wider package of protections, with more consultations and duties coming into force, including:

  • January 2025: final age assurance guidance for publishers of pornographic material, and children’s access assessments;
  • February 2025: draft guidance on protecting women and girls; and
  • April 2025: additional protections for children from harmful content promoting, among other things, suicide, self-harm, eating disorders and cyberbullying.

The heart of the new Act and Ofcom’s codes is absolutely in the right place, even if the road to hell is paved with good intentions. The internet can be a haven for some of the most vile hate, bullying, racism, child abuse and terrorism. Whole communities have even sprung up around these topics, and hostile governments often exploit them.

Suffice to say, the desire to rid the online world of such things is more than understandable, particularly for those who have suffered the most. In keeping with that, it’s easy to see why the new laws have attracted so much support from the wider electorate and cross-party MPs. But the potential problem is not with that goal, but with the overly-broad and fiendishly complex sledgehammer approach to achieving it.

The wrongful assumption seems to be that all sites will already have the necessary development skills, budget, knowledge, legal experience and time to implement everything. But what may be viable for bigger sites is not workable for everybody else, especially smaller sites that lack the resources to stand any realistic chance of properly implementing such complex rules (e.g. Ofcom’s risk assessment guide alone is 84 pages long). More support should be provided for them.

Some sites may thus respond to all this, and the risk of increased legal liability, by seeking to restrict speech, whether through the removal of user-to-user features or the imposition of much more aggressive automated filtering systems. That raises the risk of excessive overblocking (i.e. censorship through the back door of heavy legal liability).
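To make that overblocking concern concrete, here is a deliberately naive keyword filter of the sort a smaller site might bolt on under pressure; it flags innocent posts just as readily as genuinely harmful ones. The word list and example posts are entirely invented for illustration.

    # Deliberately naive keyword filter - an invented example of how blunt
    # automated moderation can sweep up perfectly legal speech.
    BLOCKED_TERMS = {"bomb", "attack", "kill"}

    def is_blocked(post: str) -> bool:
        words = post.lower().split()
        return any(term in words for term in BLOCKED_TERMS)

    posts = [
        "Police defused a bomb near the station",  # legitimate news discussion, but blocked
        "The film was a box office bomb",          # harmless figure of speech, but blocked
        "Lovely weather for the match today",      # passes
    ]
    for post in posts:
        print(is_blocked(post), "-", post)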

However, the new rules will also seek to give users an avenue of appeal for any removed content, which must be reinstated if found to have been wrongfully removed. But not all third-party systems work that way, and this risks putting sites that allow user-generated content (millions of them) into a ‘damned if they do, damned if they don’t’ position. The risk of an intolerable level of liability and legal complexity should not be underestimated.

Ofcom has said they will “offer to help providers comply with these new duties”, which at present mostly seems to consist of various complex documents that, in some cases, require a degree in regulatory waffle and law to fully comprehend. But they do plan to introduce a new tool in early 2025 to help providers check how to comply with their illegal content duties, and there’s another tool for checking if the rules apply to you.

The regulator also said they were “gearing up to take early enforcement action against any platforms that ultimately fall short”, which is likely to cause most concern for the big social media sites, particularly those that have become a bit lax of late in terms of moderation (fingers tend to point toward ‘X’). Suffice to say that there are still a lot of unknowns with the new law and the next few years may be a bit bumpy.

Ofcom’s First Edition Codes of Practice and Guidance
https://www.ofcom.org.uk/../statement-protecting-people-from-illegal-harms-online/
