A Duty of Care for Technology Companies


Make Big Tech take responsibility for its content, writes Will Perrin, a trustee of Carnegie UK Trust.



 

Despite the profound divisions and policy paralysis of Brexit, the UK finds itself at the forefront of a new model for managing harms online. In a typically British way, the new model isn’t the result of a grand strategy but is emerging from the work of dozens of forward-thinking, committed activists, charities, legislators, regulators, academics and officials. Europe and the United Nations are taking notice.

 

I have been heavily involved in the work on a statutory duty of care for social media companies, enforced by a regulator: a proposal on which the Conservative government and the Labour Party find themselves in broad, if not detailed, agreement. The duty of care stands alongside cross-bencher Baroness Kidron’s work with all parties in the Lords on an Age Appropriate Design Code, enforced by the ICO. Beeban Kidron has written for the Progressive Centre about the importance of a duty of care for technology, but where did the idea come from?

 

The Origins of ‘Digital Duty of Care’

 

In December 2017, I went to a meeting where children’s and women’s rights charities described a litany of awful things that were happening in online media to those they represent. Around the horrors was a general air that nothing was going to be done by the government and, worse, a feeling that nothing could be done. I had spent years in government working on regulatory policy and was startled that the law was not working in the face of egregious harms. Many people had fallen for the ‘it’s too big’, ‘it’s too complicated’, ‘the future is just like this’ spiel that seemed to surround the tech sector.

 

I tracked down my old colleague Professor Lorna Woods, who literally wrote the textbook on European law, and we set out to do some work with Carnegie UK Trust on how regulation could protect the vulnerable online. We had worked with Anna Turley MP in 2016 on what became her Private Member’s Bill on online harms, but that was lost in the 2017 election. We noticed straight away that there was very little technical work on regulation – not least because most regulatory folk had been employed by big tech. Successive governments from Brown to Cameron had welcomed tech investment with open arms and had taken only palliative steps to tackle the most egregious harms; these were slow to take effect. The Conservative 2017 manifesto, for all its faults, had a thoughtful discussion of how to do more about online harms, but few concrete policy proposals.

 

For all the free market, entrepreneurial blather of Silicon Valley, the big tech platforms that host content made by others are fundamentally creatures of regulation. Regulation shields them from much of the responsibility that newspapers, TV companies, radio stations and the like bear for what people post on their platforms. The user-generated content platforms are dependent on rules in the USA, NAFTA and Europe that make them a ‘mere conduit’ for the material of others. This has allowed them to grow to colossal size without having to invest much (compared to their revenues) in responsibility. Back in 2000, when less than 5% of the population had used the internet and no one knew what would happen, this approach made sense. It no longer does today.

 

The EU’s E-Commerce Directive has limited the liability of tech platforms since 2000. However, the ‘Right to be Forgotten’, initiated by a single Spanish litigant, established that Google can be held to account under European data protection law. This reminded us that the immunity provided by the E-Commerce Directive was not a blanket exclusion from the law, but limited to penalties arising from hosting the content of others. Every pixel a user sees on a technology platform is the result of decisions taken by the company that runs it: decisions about the terms of service, the software deployed and the resources put into enforcing the terms of service and maintaining the software. None of this is the content of others and all of it results from the choices of the platforms. We noticed that the E-Commerce Directive had always allowed member states to apply a duty of care, and we examined how a government could use this to regulate technology companies.

 

 

What ‘Duty of Care’ Looks Like

 

We set out an approach that is systemic, rather than palliative. At the heart of the new regime would be a new ‘duty of care’ set out by Parliament in statute. The statutory duty of care would require most companies that provide social media or online messaging services used in the UK to protect people from reasonably foreseeable harms that might arise from use of those services. The approach is risk-based and outcomes-focused. A regulator would ensure companies delivered on their statutory duty of care and would have sufficient powers to drive compliance.

 

Social media service providers should each be seen as responsible for a public space they have created, much as property owners or operators are in the physical world. Everything that happens on a social media service is a result of corporate decisions: about the terms of service, the software deployed and the resources put into enforcing the terms of service.

 

In the physical world, Parliament has long imposed statutory duties of care upon property owners or occupiers in respect of people using their places, as well as on employers in respect of their employees. Variants of duties of care also exist in other sectors where harm can occur to users or the public. A statutory duty of care is simple, broadly based and largely future-proof. For instance, the duties of care in the Health and Safety at Work Act 1974 still work well today, enforced by a competent regulator that keeps their application up to date. A statutory duty of care focuses on the objective – harm reduction – and leaves the detail of the means to those best placed to come up with solutions in context: the companies that are subject to the duty of care. It returns the cost of harms to those responsible for them, an application of the micro-economically efficient ‘polluter pays’ principle. And, as we said above, the E-Commerce Directive permits duties of care introduced by Member States.

 

Parliament should guide the regulator with a non-exclusive list of harms for it to focus upon. These should be: the stirring-up offences (such as racism and misogyny), harassment, economic and consumer harm, emotional harm, and harms to national security, to the judicial process and to democracy. Parliament has created regulators before that have had few problems arbitrating complex social issues, and these harms should not be problematic for the regulator. Some companies would welcome the guidance. We judged that the existing regulatory regime for the traditional media was adequate and that traditional media should not be covered by the new regime. The goal is to address the issues of social media, not re-run the Leveson debate.

 

We think that the regulator should be the Office of Communications (OFCOM), the existing media regulator. OFCOM has a long track record of standing up to big companies, the confidence and proven processes to make evidence-based judgements on tricky societal issues, and has grown before when new responsibilities were added to it. Even adding the substantial new responsibilities for online harms would still leave OFCOM smaller than most government departments. Most importantly, it could get going straight away. We judged that this could all be put in place quickly with a short and simple bill.

 

We worked with activists, media groups and some technology companies to test and refine the general concept during 2018. The National Society for the Prevention of Cruelty to Children worked with the law firm Herbert Smith Freehills to validate a duty of care approach. Also in 2018, the government announced it would regulate to address the broad swathe of harms caused by the internet. This was a surprise for many technology companies, despite the manifesto and the Prime Minister’s 2018 Davos speech, where she said ‘protections should be in place to help keep people safe online’. The Labour Party adopted a chunk of our recommendations in Autumn 2018, endorsed by Tom Watson MP and rooted in excellent work by Liam Byrne MP on the history of industrial regulation. During 2018-19, select committees and All Party Parliamentary Groups also endorsed the work. The Lords Communications Committee explicitly called for a duty of care and the Commons Science and Technology Committee even recommended rapid legislation to have OFCOM empowered to act by Autumn 2019.

 

In April 2019, the government published its Online Harms White Paper, which adopted a statutory duty of care on internet services that host or discover user-generated content, enforced by either a new regulator or OFCOM through codes of practice. We broadly support the government’s work but differ on a range of points. In particular, we disagree with the proposal to apply the statutory duty of care to below-the-line comments on news articles on newspapers’ own websites, which we feel should be dealt with by the existing regulatory regimes. We also think the government has not provided enough reasons to consider a regulator other than OFCOM – it should just get on and give OFCOM powers. This would allow OFCOM to manage the debate about how to balance rights that may be infringed by another’s speech – a balance about which not just the free speech lobby but also those harmed or silenced by the speech of others are passionate. The government also missed a trick in not bringing online consumer harms into the scope of the duty of care.

 

 

The Emerging UK Model

 

There are many components to the emerging UK model that tackle harms from online services, from an economic or societal perspective. Action in the UK began as long ago as 1996, when far-sighted activists and philanthropists worked with the government to create the Internet Watch Foundation, tackling the evils of child pornography. Today, the British Board of Film Classification is ready to introduce age verification checks for online pornography once the final regulatory hurdles have been cleared. On the economic front, the Competition and Markets Authority is implementing former Obama advisor Jason Furman’s ingenious report on competition in digital markets. The Information Commissioner’s Office (ICO) has just published an expert, landmark report that raises grave questions about the legality, under the General Data Protection Regulation (GDPR), of the online advertising model underpinning online services. The ICO’s work will provide a strong foundation for work across Europe to enforce people’s rights against big tech under the GDPR system. The Advertising Standards Authority has a major programme of work planned to crack down on misleading online advertising. We should add to this concrete action to tackle harms to consumers online by bringing those harms into the ambit of a duty of care, and tougher measures on the use of digital media in political campaigning.

 

 

Future of Online Harms Regulation

 

UK action is now being picked up in Europe, which will be important even after we leave the EU, given that tech companies and the government want tech regulatory convergence with Europe. The incoming European Commission is likely to rewrite completely the E-Commerce Directive that shaped the creation of the mega-platforms. The original directive allowed member states to implement a duty of care; perhaps the rewrite will make it more explicit that such a duty is needed. New Commission President Ursula von der Leyen called for online blocking of child pornography as long ago as 2009, when she was families minister in Germany. Her leadership and the welcome persistence of Margrethe Vestager in the European Commission mean that they are unlikely to let up in Brussels. In France, an expert group commissioned by the government has reported that social media should be regulated by a package of measures, including rules like the ‘Anglo-Saxon duty of care’.

 

This is only a fraction of the regulatory action underway to reduce online harms. Across Europe (and the UK), the implementation of new rules on online video will require stronger controls for children. We are talking to Canadian and New Zealand civil servants as they rewrite their laws post-Christchurch. The Indian government is considering rules to reform intermediary liability. The United Nations is writing guidance on how to interpret the UN Convention on the Rights of the Child in a digital context, which should provide a stronger platform for more child protection. In Ireland, the government has brought forward proposals to tackle online harms after an excellent campaign by Sinn Féin representative Donnchadh Ó Laoghaire TD. In the Netherlands, the competition regulator is also looking at special rules for digital platforms.

 

 

Conclusion

 

I am proud to have played a part in the formation of a UK statutory duty of care enforced by a regulator, but many, many others have been active for far longer to build this emerging British model. The UK is making outstanding progress on digital regulation, but there is still a long way to go.

 

The many parts of the emerging model touch on key points of failure in digital markets and the harms that arise. Markets will work better if the external costs arising from the operation of online platforms are returned to the business model of those who create them. Online businesses have always been shaped by regulation and will continue to be. We have broad political consensus and we have the solution; now it’s time to muster the will to act and legislate for a duty of care.

 

About the author

William Perrin is a trustee of several charities, the founder of tech start-ups and a community activist. William was instrumental in creating OFCOM, and was technology policy advisor and a private secretary to Prime Minister Tony Blair MP from 2001 to 2004. He is a trustee of Carnegie UK Trust, Good Things Foundation, Indigo Trust, The Philanthropy Workshop and 360Giving.


