Scrutiny of social media companies over objectionable content on their platforms is a gathering storm. The debate churns on what may not be posted on social media platforms, and on whether platforms can take down content on their own initiative when they find it objectionable. The rub lies largely in this context.
The fundamental right to freedom of speech and expression of Indian citizens under the Indian Constitution can be exercised only against the state. The right is subject to reasonable restrictions specified in Article 19(2) of the Indian Constitution. These restrictions are also applicable to the press.
This does not mean, however, that citizens have an absolute right to publish their views on any private forum whatsoever.
Until the advent of social media, adequate forums to express oneself existed, notably the radio, print media, or television. The prerogative of these platforms was to decide on the content they published. In effect, the content was subject to pre-approval of the relevant medium.
Hence, if private media refuses to publish a citizen's views, the citizen cannot enforce their fundamental right against a private party. It is only when the state imposes restraints going beyond Article 19(2) that the citizen has a remedy against the state.
Most of the laws that impose restrictions on freedom of speech have criminal consequences. A plethora of jurisprudence has developed on what material is really seditious, anti-religious or obscene in nature. What may appear to the common man as incorrect material may not be illegal, as the threshold to establish illegality or unlawfulness is quite high.
Social media has offered a new, accessible, influential and pervasive avenue for citizens to express themselves. While traditional media acted as publishers and retained control over what gets published, social media platforms have chosen to position themselves merely as technology platforms.
They enjoy safe harbour for material published on their platforms by third parties, subject to certain due diligence obligations. The Supreme Court has also clarified that platforms are not expected to voluntarily take down illegal content unless directed to do so by the appropriate authority. Thus, apart from auto filters and technology-enabled tools that sieve out some specific types of content, there is no pre-approval for content published on these platforms.
Slowly but steadily, social media platforms have become an important and compelling means of self-expression, of mobilising and, at times, even formulating public opinion. A powerful tool for mass reach, the platforms provide the equivalent of a public space in the digital world, used constantly by millions of users to express their views. This trend, now a subconscious habit of expressing ourselves on social media, has become so ingrained in our system that any inability to access the platforms or express oneself on them is no longer tolerated.
Expressing oneself on social media has almost achieved the status of 'necessity' and 'habit'. Every user wants this power, no matter how small or big their reach may be. The 'viral' posts, trends, social media mass movements, rise of influencers and celebrity endorsements only reaffirm the power of free speech, powered by social media.
This is where the problem arises.
When the government or courts order the takedown of content, the aggrieved citizen can invoke their fundamental rights and approach the courts to protect their freedom of speech.
The catch? When the platform itself removes content pursuant to its contract terms, better known as its terms and conditions, which allow it to remove content or suspend or terminate a user account in certain cases of breach, the dispute willy-nilly becomes a purely contractual one.
Such a dispute must be resolved according to the terms of the contract, which typically provide for a dispute redressal process. While the user is therefore not remedy-less even in this case, the recourse is not a writ to protect fundamental rights. This legal distinction is not appreciated by users of social media platforms, who consider access to and expression on social media akin to their fundamental right.
This has sparked a debate on whether platforms themselves should block or remove content or users at all.
There are several reasons why platforms may voluntarily want to remove content. Firstly, as a matter of corporate policy, platforms want to demonstrate that they are safe and happy places and allow only good conduct. Secondly, advertisers are increasingly conscious of the reputation and working practices of platforms. This drives platforms to moderate their content, lest they alienate advertisers who seek to avoid association with illegal or undesirable content. Thirdly, in some cases, platforms may lose safe harbour if they do not take down content that violates their own platform policies.
All these appear to be valid reasons for platforms' voluntary take-down actions.
The issue really bites when people or regulators believe there is inconsistent enforcement of the platforms' policies. Interestingly, there is an old English saying about courts' application of equity principles: equity varies with the length of the Chancellor's foot. When, as a society, we accept that even different judges may interpret or apply the same law differently, is it fair to expect a much higher standard from the platforms?
Separately, what about user behaviour on social media? Should users be given a free rein with no accountability for their actions or speech? What could be the way forward for social media platforms?
The solution lies in self-accountability, of both users and social media platforms.
It is important to educate users about the impact of using social media, and about what is and isn't appropriate behaviour on such mass-reach platforms. Users need to be taught the difference between public and private information, and to be discreet, and at times tactful, with sensitive information. Further, social awareness campaigns on issues such as fake news, bullying, harassment, and illegal activities on social media platforms should be created to build a society of responsible, well-aware netizens. Grievance redressal mechanisms to immediately report such acts, and the consequences of such acts, should be publicised on a large scale to help the vulnerable and deter wrongdoers.
It is perhaps time to acknowledge that mere removal of content, without offering proper reasoning to the user, no longer works. However, considering the volume of content involved, offering reasons in every case may not be feasible.
As far as platforms are concerned, revised intermediary guidelines have been in the works for some time. These could be supported by self-regulation, which would bring in a consistent approach across all social media platforms. User disputes could be managed through this mechanism as well.
The government should also consider setting up a unit to which platforms may voluntarily (sans any obligation) refer content issues for guidance and views. Platforms would thereby be spared from taking a view of their own accord in tricky situations.
These complementary approaches would not only heighten user confidence in social media platforms, but also help social media retain its intermediary status. All said, the stakes, noise and consequences may be high enough for the government to ensure that free speech continues to be protected in this new world.
(Aarushi Jain is Leader of Education & Intellectual Property Laws; Gowree Gokhale is Leader and heads IP, Technology, Media & Entertainment Laws at the international law firm Nishith Desai Associates)