Friday, January 4, 2019

Move Fast And Break Things: Government’s new rules on internet regulation could kill innovation and privacy

Source: Times of India dated 04.01.2019

“Move fast and break things” is the now infamous mantra associated with the Silicon Valley internet giants. It’s an approach that prioritised speed of creation, even if mistakes were made on that dizzy path. As it turned out, their blunders were to have a serious impact on society, elections and democracy globally.
Now the Indian government risks falling into the same trap. Last week, it hurriedly revealed proposals to radically change the “Intermediary Liability” rules for internet companies, effectively requiring all internet services to actively censor “unlawful” user content or else face liability for such content.
The aim of holding large social platforms to higher standards of transparency and accountability is a valid one. But the proposals ask internet users to put even more trust into these companies, to decide what content is appropriate and what isn’t, and they haven’t earned that trust yet. Beyond large social media companies, the rules create an existential threat to the many other services they apply to. Perhaps it is the government’s turn to slow down now.
If the internet has been characterised by permission-less innovation and communication, this can be credited in large part to the very rules that are today under threat. The new rules are proposed under Section 79 of the Information Technology Act, which, like its global counterparts, currently ensures that companies generally have no obligations to actively censor content.
Until they have knowledge of it, platforms bear only limited liability for the illegal activities and postings of their users. In 2015, the Supreme Court clarified that companies would only be expected to remove user content if directed by a court to do so. The new rules turn this logic on its head and propose a zero-tolerance approach to “unlawful content”, where services must “proactively” purge their platforms of such content or else potentially face criminal or civil liability.
The term “unlawful” is not defined, but would likely include all content that is illegal under various laws in India. This ranges from child sexual abuse material and videos of rape, to hateful speech against particular religious, caste or other groups, to content that is defamatory or infringes copyright.
Each of these involves legal standards that are vastly different, as is the surrounding context that determines their legality. Take, for example, whether a video of a provocative speech was simply a case of advocacy or an incitement to violence. These are complex inquiries, and must be steeped in factual, social and political context.
Social media companies have been in the spotlight recently over controversial decisions to remove content that did not meet their own content guidelines, leading to calls for greater transparency. With the proposed rules, however, they will be further incentivised to “take down first, think later”, or prevent such content from surfacing at all.
Presumably to address the practical questions of scale, the draft rules require companies to deploy “automated tools to filter content”. Rather than creating more transparency about – or fairness in – platforms’ content moderation policies, this will only encourage a black box approach that is bound to lead to inaccurate and opaque decisions on content.
In encouraging automated tools, the government is giving primacy to the speed and quantity, rather than the quality, of content removals. These are crude and inappropriate metrics of success where critical fundamental rights are at stake.
Even as the public outcry around unchecked government surveillance grows, the draft rules take another step backwards on the question of privacy. They also require these services to make available information about the creators or senders of content to government agencies. For end-to-end encrypted messaging platforms like WhatsApp and Signal, this could mean companies will be expected to intentionally store records of who sent messages to whom, with the sole purpose being government surveillance.
The government has justified these moves by invoking “instances of misuse of social media by criminals and anti-national elements”, but the rules they propose go far beyond the handful of companies they refer to. For small and medium-sized online services, as well as start-ups, for example, these content control obligations will be a disproportionate burden.
And the expansive definition of “intermediaries” in these rules would even include internet service providers, browsers and operating systems. For such entities, content control obligations seem entirely misplaced and inapplicable, and yet they create a legal risk that can’t be ignored.
In the full glare of media attention, the government has invited feedback. What this proposal needs, however, is a complete rethink. Building a rights protective framework for tackling illegal content on the internet is a challenging task. But any way you look at it, undermining encryption and outsourcing content regulation to companies are blunt and disproportionate tools.
For better or worse, our fundamental freedoms and rights online are intertwined with the laws that apply to the mediums we use to communicate. This is not about the concerns of a handful of companies alone. Rather than see this move through the trope of big tech versus big brother, we must understand that it is, above all, a threat to internet users.
The writer is a lawyer and public policy adviser at Mozilla
