Online Harms White Paper - a daunting new world for social media sites
This 98-page tome has provided a welcome distraction from Brexit. The proposals are, by the Government’s own admission, eye-catching: “The measures outlined in this White Paper are novel and ambitious, with potentially far-reaching effects for how our society engages with the Internet.” Judge that for yourselves. What is abundantly clear is that the days of the internet as the ‘Wild West’ or ‘final untamed frontier’ are numbered.
The scene is set thus: “The Government wants the UK to be the safest place in the world to go online … given the prevalence of illegal and harmful content online and the level of public concern about online harms, not just in the UK but worldwide, we believe that the digital economy urgently needs a new regulatory framework to improve our citizens’ safety online…” So the bar has been set high for our legislators, both in taming the untamed and in leading the way rather than following in others’ wake.
Most people can come up with at least a handful of examples of the online harms in contemplation. The list is, in fact, depressingly long:
- Child sexual exploitation and abuse (“CSEA”);
- Terrorist content;
- Content illegally uploaded from prisons;
- Content encouraging gang violence;
- Sale of opioids and other illegal substances;
- Anonymous abuse;
- Self-harm and suicide;
- Under-age sharing of sexual imagery;
- Disinformation (including fake news);
- Manipulation (including content undermining democracy);
- Abuse of public figures and trolling;
- Not to mention excessive screen time and addiction.
A damning indictment, and yet the response is not to outlaw such behaviour - in many cases it is already illegal; the White Paper does not create any new offences or proscribe any conduct previously not prohibited. Rather, it is directed towards the companies that host and enable the sharing and discovery of user-generated content, or which facilitate public and private online interaction in the form of instant messaging or comments on posts. Facebook, Instagram, Snapchat and YouTube are the mainstream platforms most obviously in scope.
The solution? The creation of a new statutory duty of care to force companies to take greater responsibility for the safety of their users and to tackle harms caused by content or activity on their services: a duty to do all that is reasonable to keep their users safe online. Compliance with this new duty of care is to be overseen by a newly appointed regulator, either by extending the remit of an existing regulator such as Ofcom, or by creating a bespoke one from scratch. One of the regulator’s responsibilities will be to publish codes of practice explaining how companies are required to fulfil their new statutory duties. Where the online harms stray into areas of national security or child protection, the Government, via the Home Office, will exercise a supervisory role in the drawing up of the codes. Companies will be expected to take reasonable steps to keep users safe and to prevent other persons (not just users of the services) coming to harm as a consequence of activity on their services. The new regime will be both proportionate and risk-based. Companies will be required to take action proportionate to the severity and scale of the harm in question, and they will be assessed according to their size and resources and the age of their users. Correspondingly, the regulator’s focus will be on those companies that pose the biggest and clearest risk of harm to users, either because of the scale of their platforms or because of known issues with harmful content.
Competing considerations such as freedom of expression and privacy are explicitly acknowledged. The regulator would be obliged to protect users’ rights to freedom of expression.
In relation to privacy, the White Paper distinguishes between types of harm and types of communications channels, though it delivers mixed messages in the process: the regulator would not require companies to undertake general monitoring of communications, except where terrorism or CSEA is concerned (para 3.12); and yet (para 4.7) the regulator would distinguish between public and private communications channels, with the latter exempt from any scanning or monitoring obligations. An obvious lacuna, surely? The criteria for distinguishing between private and public communications channels are a matter for consultation.
The new statutory duty of care would be buttressed by appropriate enforcement powers. These would include the power to issue enforcement notices, levy fines and publish public notices. Furthermore, to ensure the efficacy of the new regime in relation to off-shore operators, further categories of sanction would include targeting ancillary services, such as advertising, electronic payments or even search engine ranking; ISP blocking (a statutory weapon also forming part of the BBFC’s armoury in relation to the new age-verification requirements under the Digital Economy Act 2017); and, most controversially and headline-grabbing of all, the potential for individual members of senior management to be made personally liable. Ominously, the White Paper states: “This could involve personal liability for civil fines, or could even extend to criminal liability.”
Process-wise, companies in scope will be required to have an effective complaints process in place; if they are unable to resolve a complaint to the user’s satisfaction, there would be an, as yet unspecified, independent resolution process. More significantly, aggrieved users could use any findings of the regulator to found a civil claim for breach of statutory duty or breach of contract. The White Paper also trails the idea of “super complaints”, that is, group actions, presumably analogous to those permitted under the GDPR.
As mentioned, HMG has decided to avoid creating any new offences for hosting illegal or harmful content. This would have been a radical step, bearing in mind that the E-Commerce Directive protects such platforms unless or until they have actual or constructive knowledge. The Government has concluded, after further consideration, that it would not be appropriate or proportionate to transition from knowledge-based liability to publisher-type liability, where companies could be liable for all content appearing on their platforms. At the same time, continued reliance on the principles of the E-Commerce Directive alone will no longer be sufficient, as this permits a purely reactive culture: a much higher level of scrutiny and responsibility is required. Accordingly, whilst the E-Commerce regime will remain untouched, the necessary increased accountability will be ensured by the new statutory duty of care upon platforms.
Analogous to the privacy-by-design principle enshrined in the GDPR, the White Paper envisages technology as part of the solution: companies will be strongly encouraged to invest in the development of safety technologies to reduce the burden on users to stay safe online.
Predictably, the White Paper has its share of detractors, particularly those who can imagine the hand of Putin, Kim Jong-un or Xi Jinping at work. However, the tide of public opinion will far outweigh those concerns. But there is a long way to go. The consultation closes on 1 July 2019 and there is bound to be a significant clamour to respond. Once the responses have been digested and assessed, draft legislation will be published, and there will be a huge weight of expectation amongst technologists, privacy and free speech campaigners, consumer groups, not to mention lawyers too, as to how this new duty of care will be framed and users’ new rights expressed. But big changes are in the offing...
This guide is for general information and interest only and should not be relied upon as providing specific legal advice. If you require any further information about the issues raised in this article please contact the author or call 0207 404 0606 and ask to speak to your usual Goodman Derrick contact.