Today's Editorial - 26 March 2021

Inside new social media code

Source: Pranav Mukul, The Indian Express

Citing instructions from the Supreme Court and the concerns raised in Parliament about social media abuse, the government on 25 February 2021 released guidelines that aim to regulate social media, digital news media, and over-the-top (OTT) content providers.

For social media platforms, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 envisage a category of significant social media intermediaries, the threshold for which will be announced later. In addition, the government said that it wanted to create a level playing field in terms of rules to be followed by online news and media platforms vis-à-vis traditional media outlets.

What is the background of these guidelines?

At a press conference, Law & IT Minister Ravi Shankar Prasad cited a 2018 Supreme Court observation and a 2019 Supreme Court order, along with discussions in the Rajya Sabha (once in 2018, and again through a committee report laid in 2020), as the basis for framing rules to “empower the ordinary users of digital platforms to seek redressal for their grievances and command accountability in case of infringement of their rights”.

The government had been working on these guidelines for over three years; however, the big push came in the form of the violent incidents at the Red Fort on 26 January, following which the government and Twitter were embroiled in a spat over the removal of certain accounts from the social media platform.

What are the key proposals that the guidelines make for social media?

Section 79 of the Information Technology Act provides a “safe harbour” to intermediaries that host user-generated content, and exempts them from liability for the actions of users if they adhere to government-prescribed guidelines.

The new guidelines notified on 25 February 2021 prescribe an element of due diligence to be followed by intermediaries, failing which the safe harbour provisions would cease to apply to platforms such as Twitter, Facebook, YouTube, and WhatsApp.

They also prescribe a grievance redressal mechanism, mandating that intermediaries, including social media platforms, establish a mechanism for receiving and resolving complaints from users. These platforms will need to appoint a grievance officer, who must acknowledge a complaint within 24 hours and resolve it within 15 days of receipt.

Do the guidelines lay the rules for removal of content from social media?

In essence, the rules lay down 10 categories of content that social media platforms should not host.

These include content that “threatens the unity, integrity, defence, security or sovereignty of India, friendly relations with foreign States, or public order, or causes incitement to the commission of any cognizable offence or prevents investigation of any offence or is insulting any foreign States”; “is defamatory, obscene, pornographic, paedophilic, invasive of another’s privacy, including bodily privacy; insulting or harassing on the basis of gender; libellous, racially or ethnically objectionable; relating or encouraging money laundering or gambling, or otherwise inconsistent with or contrary to the laws of India”, etc.

The rules stipulate that, upon receiving information from a court or the appropriate government agency that it is hosting prohibited content, the platform should remove the said content within 36 hours.

What does the due diligence entail for social media companies?

In addition to appointing a grievance officer, social media platforms will now be required to appoint a chief compliance officer resident in India, who will be responsible for ensuring compliance with the rules. They will also be required to appoint a nodal contact person for 24×7 coordination with law enforcement agencies.

Further, the platforms will need to publish a monthly compliance report giving details of complaints received and action taken on them, as well as details of content removed proactively by the significant social media intermediary.

While the rules were notified and took effect on 25 February 2021, the due diligence requirements will come into effect three months later.

What are the penalties for companies violating these guidelines?

In case an intermediary fails to observe the rules, it would lose the safe harbour, and will be liable for punishment “under any law for the time being in force including the provisions of the IT Act and the Indian Penal Code”.

While the offences under the IT Act range from tampering with computer source documents and hacking into computer systems to online misrepresentation, breach of confidentiality and privacy, and publication of content for fraudulent purposes, among others, the penal provisions vary from imprisonment of three years to a maximum of seven years, with fines starting from Rs 2 lakh.

For example, any person who intentionally tampers with, conceals, destroys, or alters any computer source code shall be punishable with imprisonment of up to three years, or a fine of up to Rs 2 lakh, or both.

Under Section 66 of the IT Act, a person who, without the permission of the owner or any other person in charge of a computer or computer network, damages it, is liable to a fine of up to Rs 5 lakh, imprisonment of up to three years, or both.

Section 67A of the IT Act provides for fines and imprisonment for persons transmitting material containing a “sexually explicit act, or conduct”. On first conviction, such persons are liable to a fine of up to Rs 10 lakh and imprisonment of up to five years; on second conviction, the jail term goes up to seven years.

Executives of intermediaries that fail to act on an order issued by the government citing a threat to the sovereignty or integrity, defence, or security of the state, or to public order, can be jailed for up to seven years under Section 69 of the IT Act.

What is the current law in India with regard to data privacy on the Internet, and for social media users?

Although the IT Act, 2000 has no provisions that specifically define privacy, nor general penal provisions relating to privacy, some sections of the Act deal with specific cases of data breaches and privacy.

For example, Section 43A provides for compensation if an intermediary is negligent in implementing and maintaining reasonable security practices and procedures to protect the data of its users. Though this section requires companies to use “reasonable security practices and procedures”, the term is not clearly defined and can be interpreted in various ways.

Section 72 of the IT Act provides for fines and imprisonment if a government official, in the course of his or her duty, gets access to certain information and subsequently leaks it.

Section 72A provides for criminal punishment if a service provider, in the course of providing a service or performing a contract, discloses a user’s personal information without the user’s consent.

What do the rules for OTT services mean for consumers?

For OTT service providers such as YouTube and Netflix, the government has prescribed self-classification of content into five categories based on age suitability.

Online curated content that is suitable for children and for people of all ages shall be classified “U”, and content that is suitable for persons aged 7 years and older, and which can be viewed by a person under the age of 7 years with parental guidance, shall be classified “U/A 7+”.

Content that is suitable for persons aged 13 years and above, and can be viewed by a person under the age of 13 years with parental guidance, shall be classified “U/A 13+”; content that is suitable for persons aged 16 years and above, and can be viewed by a person under the age of 16 years with parental guidance, shall be classified “U/A 16+”.

Online curated content that is restricted to adults shall be classified “A”. Platforms would be required to implement parental locks for content classified “U/A 13+” or higher, and reliable age verification mechanisms for content classified “A”.