From October 2017, social media companies in Germany face fines of up to €50m if they fail to remove “obviously illegal” content. MPs voted for the Netzwerkdurchsetzungsgesetz (NetzDG) law on the 30th June – the last legislative day before the Bundestag closed for summer – setting a precedent for governments holding social media companies responsible for the moderation of content on their own platforms. With Prime Minister Theresa May and French President Emmanuel Macron both having shown support for cooperating on a similar, counter-terrorism-specific legislative initiative in the UK and France, it is worth considering the debates that surround this contentious issue, and whether the threat of fines risks forcing tech companies “offside”.
Standing on the steps of Downing Street on the 4th June – a day after the London Bridge terror attacks – the British prime minister accused social networks of not having done enough to curb the spread of extremist content. A week later, Theresa May announced that she was drawing up proposals with France to crack down on social media companies that fail to remove “terrorist propaganda” from their networks. These proposals, however, have faced more resistance than initially expected. One of the most notable critics is the UK’s Independent Reviewer of Terrorism Legislation, Max Hill QC, who has questioned whether Theresa May’s suggestions are the best course of action, stating that he would “struggle to see how it would help if our parliament were to criminalise tech company bosses who ‘don’t do enough’.”
His questions – especially “How do we measure ‘enough’?” and “What is the appropriate sanction?” – touch upon some of the issues at the heart of what critics see as tech policy designed to direct the development trajectory of private-sector companies. From the perspective of Facebook, Microsoft, Twitter, and YouTube, their existing efforts – which include active participation in the EU Internet Forum; the creation of a shared industry database of unique terrorist image and video hashes in December 2016; and the formation of the joint “Global Internet Forum to Counter Terrorism” in June 2017 – most certainly constitute “enough”. The latter two, projects that have driven rival companies to collaborate, serve as notable examples of the industry displaying due care and attention to government demands, most likely for fear of further regulation. His reference to sanctions is also apt, as punitive consequences assume that a social media company had the scope and capability to resolve an issue, and that its failure to do so was an active decision – a presumption that will no doubt be at the core of inordinately expensive legal battles, if legislation does appear. As Max Hill QC aptly put it himself, they make “eye-watering sums of money from our everyday chatter”, and by no means are they strangers to litigation.
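The shared hash database mentioned above works, at a high level, by fingerprinting content that one platform has already judged to be terrorist material, so that re-uploads can be matched by every participating company without each of them re-reviewing it. The following is a minimal sketch of that lookup pattern only; it uses a plain cryptographic hash as a stand-in for the perceptual fingerprints the companies actually share (which, unlike SHA-256, are designed to survive re-encoding and resizing), and all names in it are hypothetical:

```python
import hashlib

# Hypothetical shared database: fingerprints of content already flagged.
# The real consortium database uses proprietary perceptual hashes;
# SHA-256 here only matches byte-identical files.
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Compute a fingerprint for an uploaded image or video."""
    return hashlib.sha256(content).hexdigest()

def register_flagged_content(content: bytes) -> None:
    """One platform flags content; every participant can now match it."""
    shared_hash_db.add(fingerprint(content))

def is_known_flagged(content: bytes) -> bool:
    """Check an upload against the shared database before it goes live."""
    return fingerprint(content) in shared_hash_db

# A file flagged on one network is caught when uploaded to another.
register_flagged_content(b"example propaganda video bytes")
print(is_known_flagged(b"example propaganda video bytes"))  # True
print(is_known_flagged(b"unrelated holiday photo"))         # False
```

The design matters for the policy debate: the database lets companies share the *judgement* that content is illegal without sharing the content itself, but it can only catch material that some platform has already seen and flagged.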
Using sanctions, rather than incentives, can also be criticised for causing social media networks to err on the side of censorship. Social networks often try hard to align themselves with Western values of free speech, political freedoms, and the age-old tenet of anonymity, and the thoughtless deletion of content to avoid fines would do great damage to this reputation. Max Hill’s remark, that “we do not live in … [a totalitarian regime] where the internet simply goes dark for millions when government so decides … our democratic society cannot be treated that way”, also resonates with human rights activists, who believe it is a small step from fining social media companies to Home Secretary Amber Rudd’s suggestion of outright banning end-to-end encryption – a technology with a long history of protecting political opposition in repressive regimes.
In order to further understand the predicament that the proposed responsibilities present to social media networks, it is also worth discussing these tech companies’ internal capabilities, values, and development trajectories. Recent advances, particularly in natural language processing and reinforcement learning – two subsets of artificial intelligence – have progressed to a standard at which certain challenges, such as identifying extremist content online, seem far more feasible than they did a few years ago. Virtually every social media company has data scientists and software engineers allocated to researching these technologies, and a number of recent acquisitions show that tech companies are willing to fork out extortionate sums for world-leading AI researchers if they do not have them already. Now, whilst the success of these technologies is notoriously bound to a number of tricky variables – for example, the quality of the data on which these algorithmic models are initially trained – the prospect of devoting research and development time to furthering cutting-edge technology is far more attractive than hiring thousands of moderators to trawl through reported content. After all, the sheer volume of content published on leading social networks makes it near impossible (practically, if not financially) to moderate manually, and more accurate natural language processing models can be reused across a vast range of products, to achieve an even wider range of objectives.
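The dependence on training data quality noted above is easy to see with a toy example. The sketch below – a deliberately naive keyword flagger, not any company’s actual system, with an invented term list – shows the twin failure modes that make “enough” so hard to measure: context-blind models flag innocent discussion of extremism (over-censorship, feeding the free-speech concerns above) while missing rephrased material entirely:

```python
# A deliberately naive keyword-based flagger, to illustrate why automated
# moderation is hard. The term list is a hypothetical toy example.
FLAGGED_TERMS = {"attack", "bomb"}

def naive_flag(post: str) -> bool:
    """Flag a post if any word matches the term list, ignoring context."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & FLAGGED_TERMS)

print(naive_flag("We will attack at dawn"))               # True
print(naive_flag("The article analyses the bomb plot"))   # True  (false positive: news coverage)
print(naive_flag("Meet at the usual place, bring tools")) # False (false negative: coded language)
```

Real systems replace the keyword set with statistical models learned from labelled examples, which is precisely why the quality of that training data – and the engineering talent to curate it – dominates how well “removal” can be automated.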
For this reason, a number of these companies would question the choice of legislating punitive fines rather than offering government-led initiatives or incentives to encourage research in these fields. Proponents of incentives argue that the companies themselves have a vested interest in these technologies progressing, and that the threat of fines does little to stimulate innovation compared to incentives that astutely appreciate the current context of technological development.
Whether or not Prime Minister Theresa May is correct in stating that social media companies offer a “safe space” for extremist ideology, her proposed fines raise fears that the government is too keenly using the proverbial stick rather than the carrot, against an enemy it cannot afford to have. Social media has become an undeniable influence in our lives, yet its politicisation brings with it dilemmas that draw upon questions of privacy, freedom of speech, and determinative policy, in a manner where contextual ignorance could hamper valuable technological advances and skew political balances worldwide. To that effect, the question of fining social media companies for a failure to remove content carries a great burden of implications, particularly if there are other options on the table that could help us achieve our technological and political goals, whilst keeping social media companies “onside”.