Tech companies which fail to prevent harm face ban in UK

Ahead of the publication of its full response to the Online Harms White Paper, the government has laid out some details of its plans to mitigate a range of online harms, from child exploitation to terrorist propaganda.

According to the government, online platforms which fail in their duty of care to users, such as by not removing dangerous content, could face multimillion-pound fines and even a ban in the UK.

Digital Secretary Oliver Dowden said that the Online Safety Bill, which will be introduced to Parliament next year, will usher in a “new age of accountability” for online platforms.

“We are entering a new age of accountability for tech to protect children and vulnerable users, to restore trust in this industry, and to enshrine in law safeguards for free speech,” he said. “This proportionate new framework will ensure we don’t put unnecessary burdens on small businesses but give large digital businesses robust rules of the road to follow so we can seize the brilliance of modern technology to improve our lives.”

Ofcom will have responsibility for regulating online companies; the rules will apply to any company hosting user-generated content online which can be accessed by people in the UK, or which enables them to interact with others online.

Ofcom will have the power to fine companies up to £18m or 10 per cent of global turnover (whichever is higher) for failing in their statutory duty of care to users. Platforms that do not comply with the rules can be blocked from being accessed in the UK. Ofcom will require companies to use “targeting technology” to identify and remove illegal material.

Proposals to impose criminal liability on senior executives of companies which fail to comply with the rules have been deferred; the government said it would introduce these powers through additional legislation if deemed necessary.

The government will establish a two-tier system. Companies operating large online platforms with many high-risk features – including Facebook, Twitter, Instagram, and TikTok – will be placed in “Category 1”.

These companies will be required to take steps to address illegal content and activity; publish transparency reports detailing their work to tackle online harms; introduce extra protections for children; and set out in their terms and conditions what “legal but harmful” content is acceptable on their platforms. Such content may include cyberbullying and deliberately addictive or otherwise exploitative apps; Dowden said it could also include anti-vaccination disinformation. The government is working with the Law Commission to decide whether the promotion of self-mutilation should be made illegal.

“Category 2” platforms will include private messaging apps, dating services, and pornographic websites.

“A 13-year-old will no longer be able to access pornographic images on Twitter. YouTube will be banned from recommending videos promoting terrorist ideologies. Criminal anti-Semitic posts will need to be removed without delay, while platforms will have to stop the intolerable abuse that many women face,” Dowden wrote.

The Home Secretary, Priti Patel, said: “We are giving internet users the protection they deserve and are working with companies to tackle some of the abuses happening on the web. We will not allow child sexual abuse, terrorist material, and other harmful content to fester on online platforms. Tech companies must put public safety first or face the consequences.”

Private communications platforms such as WhatsApp and closed social media groups will be included within the scope of the regulations. However, the legislation will not cover online articles and comment sections, in an effort to balance protection of free expression with safety.

Julian Knight, chair of the Digital, Culture, Media, and Sport (DCMS) Committee, said: “Today marks a major step forward in laws that will see powerful tech companies held to account. Credit is due to the work of the DCMS Committee in the last parliament for its extensive investigation into disinformation and fake news that led the charge for online regulation.”

“A duty of care with the threat of substantial fines levied on companies that breach it is to be welcomed. However, we’ve long argued that even hefty fines can be small change to tech giants and it’s concerning that the prospect of criminal liability would be held as a last resort.”