Not constant government surveillance but non-intervention will help India’s internet safety

When India’s Information Technology Act 2000 was first enacted, it conferred a “conditional safe harbour” on “intermediaries”, which include social media platforms, instant messaging portals, and entities acting as conduits connecting users. This benefited digital players, who were exempted from liability for third-party information hosted on their platforms once they met specific due diligence criteria.

However, these obligations became stricter with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, also known as the Intermediary Rules.

The proposed amendments to the Rules “put the interests of digital India first” by placing a Grievance Appellate Committee (GAC) at their centre, setting new accountability standards for social media companies, and requiring that they respect the constitutional rights of Indian citizens.

This committee would serve as the government authority hearing appeals against the outcome of an intermediary’s grievance redressal process. Unlike a statutory regulator, it would report to a Union ministry and comprise a chairperson and government-appointed members.

Proponents of social media regulation hailed these due diligence requirements as a step towards greater accountability and the introduction of government oversight mechanisms. However, the Rules contradict the earlier expectation that social media platforms and intermediaries take a hands-off approach in order to benefit from safe harbour protection.

The IT Act 2000 safe harbour was conditional. First, it was granted only to intermediaries whose function was limited to providing access to a communication system. Second, the intermediary could not participate in the transmission by initiating it, selecting its recipients, or modifying the information transmitted. The new Intermediary Rules, however, expect players to meet several onerous obligations to demonstrate compliance and retain their safe harbour, failing which they can be penalised under applicable laws.

It would be remiss not to mention that the Supreme Court, in its landmark decision in Shreya Singhal v. Union of India, held that intermediaries would be obligated to remove content from their platforms only after receiving an order from a court or government authority. Similarly, actual knowledge would be attributed to an intermediary only when a court order or government notification brings infringing content on its platform to its attention.

The creation of the GAC appears to fill a distinct void: the absence of a government authority empowered to order intermediaries to take down content, in accordance with the Supreme Court’s dictum. However, it raises significant concerns about independence. Social media platforms fear that government influence over the GAC could lead to selective crackdowns to curb dissent.

Concerns include the lack of a specific appeal mechanism to challenge the committee’s decisions. Moreover, unlike statutory authorities such as the Securities and Exchange Board of India (SEBI), the Insurance Regulatory and Development Authority of India (IRDAI) or the Reserve Bank of India (RBI), which were established by Acts of Parliament, this ‘committee’ is established by rules framed under powers delegated to the government, rather than by the legislature.

Accordingly, the GAC cannot be the final arbiter of a grievance against a social media platform: in the absence of a further appeal mechanism, only the constitutional courts could be approached in appeal. While the courts should remain the final arbiter of what content is taken down, this intermediate appellate mechanism is unlikely to reduce the burden of court cases, offering instead a slow and cumbersome route for users and social media platforms. Under it, a user’s grievance would first be referred to the platform, then to the appellate committee (which cannot be considered a quasi-judicial authority), and then, inevitably, to the courts.


Hands-off approach

By creating an intermediate authority to handle user complaints, policymakers have reversed the simpler approach of requiring intermediaries to take a zero-intervention stance (one that eliminates discrimination and the selective suppression of free speech by intermediaries). That approach would have allowed intermediaries to remain mere “conduits” that do not manipulate content and can maintain end-to-end encryption.

This approach relies on the proactive reporting of fake news, threats of violence, or other unlawful content by affected users rather than by the platforms, leaving it to law enforcement and the judiciary to intervene and impose sanctions. The concerns of India’s vast base of active users about permitting government surveillance of private communications on messaging platforms, along with the sheer volume of disturbing complaints, make this approach sound in policy but not in practice. It also rests largely on the assumption that intermediaries would cooperate with the government, notwithstanding personal data privacy laws, to facilitate access to and monitoring of their users.

Finally, the effectiveness of this approach in stopping the spread of malicious content rests on three assumptions: first, that users will promptly report malicious content; second, that authorities will quickly determine whether content is malicious and order its removal; and third, that users will not file frivolous appeals in volumes that overwhelm a grievance appellate body.


Defining self-regulation

At the other end of the policy spectrum is an approach that mandates self-regulation but sets well-defined and unambiguous parameters. Under it, the only discretion intermediaries have is to determine whether content qualifies for removal once it comes to their attention. Even then, an intermediary’s knowledge must meet the touchstone of actual knowledge, lest there be a mistaken expectation that an intermediary, a private entity, will engage in self-monitoring of content posted by users.

It is an established position that any removal of content is a restriction on freedom of expression. Naturally, then, legislation enacted by Parliament, rather than rules issued by the incumbent government department, should define the types of content that merit removal and outline the circumstances in which removal is warranted. Moreover, these cannot be lazily defined in broad strokes; they must pass the tests of proportionality, reasonableness, and constitutionality.

A government-appointed appellate authority empowered to determine at its discretion what content should be removed leaves the door wide open to abuse of power and the suppression of dissent. A recent example is the misuse of Section 66A of the Information Technology Act 2000, which was used to curtail freedom of expression on the grounds that contested content was ‘grossly offensive’ or ‘threatening’, to the point that the Supreme Court eventually struck it down. This underscores the need to avoid government policing of freedom of expression on the internet. Moreover, the ebbs and flows of governments tend to colour a governmental authority’s interpretation of even unambiguous legislation.

In this context, it would be naively optimistic to entrust discretionary power to government-backed bodies on the assumption that it will not be abused.

The creation of government-controlled, tiered grievance redressal mechanisms within the executive, ostensibly to cope with the volume of complaints, is likely to burden the judiciary with appeals for intervention. Intermediaries should instead feel empowered to act quickly and suspend accounts once content is reported, without fear of losing their safe harbour.

However, this can be achieved only when the law itself adopts a litmus-test approach to defining what merits censorship. When legislation defines and codifies these thresholds, it is subject to public scrutiny, making it far easier to test whether the restrictions are reasonable limits on free speech, rather than leaving an intermediary or a discretionary authority to interpret the law.


Prescriptive legislation: foreseeable liability

As the law constantly plays catch-up with evolving technology and its use cases, it is impossible to anticipate and prescribe what an intermediary must do in every circumstance. Attempting to do so also diverts legislative effort towards legislating for exceptions rather than the rule. Instead, the law can take a principles-based approach to intermediaries’ removal of offensive content and set a bright-line test describing the circumstances in which they would not be liable for hosting controversial content.

With so little room left for interpretation, intermediaries that adopt a perverse reading, or fail to apply clearly defined criteria objectively, would rightly expose themselves to punitive measures imposed by the courts. Legislating take-down powers for intermediaries, albeit within well-defined limits to prevent the selective removal of content, would allow them to adopt the mechanisms best suited to their platforms to enforce the principles set out in the legislation.

Undoubtedly, intermediaries should be held more accountable for unbalanced or discriminatory censorship of information and for promoting misleading paid content (which also attracts advertiser liability). Extending safe harbour to intermediaries so that they are not held liable for taking action against infringing content, rather than withdrawing it for inaction, would incentivise them to moderate content fairly and proactively. Rather than conferring on a governmental authority the discretion to interpret the law correctly, free of personal or political bias, the law itself should be written in a more binary form.

Akash Karmakar is a partner at the law firm Panag & Babu, where he leads the financial technology and regulatory advisory practice. He tweets @kashxkarmakar. Views are personal.

(Editing by Zoya Bhatti)
