Online Safety Bill: Will UK’s new law protect people from harm online?

The final draft of legislation designed to protect people from “harmful content” online goes before Parliament today, but critics warn it is likely to have unintended negative consequences


17 March 2022


The final draft of the UK government’s long-awaited legislation designed to protect people from “harmful” content on the internet is today being presented to Parliament.

The Online Safety Bill puts the onus squarely on technology companies to spot anything deemed harmful – but not necessarily illegal – and remove it, or face stiff consequences. Critics say it is well-intentioned, but vague, legislation that is likely to have negative unintended consequences.

Nadine Dorries, the UK’s secretary of state for digital, culture, media and sport, said in a statement that tech firms “haven’t been held to account when harm, abuse and criminal behaviour have run riot on their platforms”. But it remains unclear how government will decide what is, and what is not, “harmful” and how technology companies will moderate content according to those decisions.

What does the final draft propose?

The legislation is wide-ranging. There will be new criminal offences for individuals, targeting so-called “cyberflashing” – sending unsolicited graphic images – and online bullying.

Technology companies such as Twitter, Google, Facebook and TikTok also get a host of new responsibilities. They have to check all adverts appearing on their platforms to make sure they aren’t scams, while those that allow adult content will have to verify the age of users to ensure they aren’t children.

Online platforms will also have to proactively remove anything that is deemed “harmful content” – details of what this includes remain unclear, but the announcement today mentioned the examples “self-harm, harassment and eating disorders”.

A preview of the bill in February mentioned that “illegal search terms” would also be banned. New Scientist asked at the time what would be included in the list of illegal searches, and was told no such list yet existed, and that “companies will need to design and operate their services to be safe by design and prevent users encountering illegal content. It will be for individual platforms to design their own systems and processes to protect their users from illegal content.”

The bill also gives regulators and watchdogs stronger powers to investigate breaches: one new criminal offence will target employees of firms covered by the legislation who tamper with data before handing it over, and another will target those who halt or obstruct raids or investigations. The regulator Ofcom will have the power to fine companies up to 10 per cent of their annual global turnover.

Will it work?

Alan Woodward at the University of Surrey in the UK says the legislation is being proposed with good intentions, but the devil is in the detail. “The first issue comes about when trying to define ‘harm’,” he says. “Differentiating between harm and free speech is fraught with difficulty. Some subjective test doesn’t really give the sort of certainty a technology company will need if they face being held liable for enabling such content.”

He also points out that tech-savvy children will be able to use VPNs, the Tor browser and other tricks to easily get around the measures relating to age verification and user identity.

There are also concerns that the bill will push technology companies towards an overly cautious approach to what they allow on their sites, stifling free speech, open discussion and potentially useful content with controversial themes.

Jim Killock at the Open Rights Group warns that moderation algorithms created to abide by the new laws will be blunt instruments that end up blocking essential sites. For instance, a discussion forum offering mutual support and advice to those tackling eating disorders, or giving up drugs, could be banned. “The platforms are going to try to rely on automated methods because they’re ultimately cheaper,” he says. “None of this has had a great success record.”

The government claims that “harmful” topics will be added to a list and approved by Parliament. This is intended to remove grey areas and prevent content that would be legal under the new measures from inadvertently being removed, but some have taken it as reassurance that controversial opinions will be protected. For instance, The Daily Telegraph reports today: “‘Woke’ tech firms to be stopped from cancelling controversial opinions on a whim”.

When will it become law?

The bill will be put before Parliament on 17 March, but it must be approved by both houses and receive royal assent before it becomes an act of Parliament and legally binding. This process could take months or even years, and there are likely to be further revisions.

What do technology companies make of it?

Anything that increases the burden of responsibility and introduces new risks for negligence won’t be popular with tech firms, and companies that operate globally are unlikely to be pleased at the prospect of having to create new tools and procedures for the UK market alone.

Google and Facebook didn’t respond to a request for comment, while Twitter’s Katy Minshall says “a one-size-fits-all approach fails to consider the diversity of our online environment”, but adds that Twitter would “look forward to reviewing” the bill.
