Looking forward: Challenges facing online speech regulation in India

In India, the largest practical exercise in electoral politics the world has ever seen has just come to an end. Narendra Modi and his Bharatiya Janata Party (BJP) have been returned to power for a rare third consecutive term, although without an outright majority. While there are many priorities facing the new administration, one of them will undoubtedly be modernising India’s outdated online regulatory framework.

The growth of internet access in India has been exponential. According to the Ministry of Electronics and Information Technology (MeitY), 5.5 million Indians were online in 2000; last year that number was 850 million. India’s increasing economic and geopolitical clout has been matched by a willingness to take on the tech giants to control the country’s image online. The Indian government has not tiptoed around calling for platforms such as X and YouTube to remove content or accounts. According to the Washington Post, “records published by the Indian Parliament show that annual takedown requests for posts and accounts increased from 471 to 6,775 between 2014 and 2022, with those to Twitter soaring from 224 in 2018 to 3,417 in 2022.”

India’s online regulatory regime is over 20 years old, and with the proliferation of online users and the emergence of new technologies, its age is starting to show. India is not alone in wrestling with this complex issue – just look at the Online Safety Act in the UK, the Digital Services Act (DSA) in the EU and the ongoing discussions around Section 230 of the Communications Decency Act in the USA. Following the election, the government has confirmed its intention to update and expand the regulation of online platforms through the ambitious Digital India Act (DIA).

The DIA is intended to plug this regulatory gap and, while the need is apparent, the devil will be in the detail. MeitY has stated that while the internet has empowered citizens, it has “created challenges in the form of user harm; ambiguity in user rights; security; women & child safety; organised information wars, radicalisation and circulation of hate speech; misinformation and fake news; unfair trade practices”. The government has hosted two consultations on the Bill, and they reveal the sheer scale of the Indian government’s vision, covering everything from online harms and content moderation to artificial intelligence and the digitalisation of government.

Protections against liability for internet intermediaries hosting content on their platforms – often called safe harbour – have long defined global discussions around online free expression, and this is a live question hanging over the DIA. During an early consultation on the Bill held in the southern city of Bengaluru, Minister of State for Information Technology Rajeev Chandrasekhar posed the question:

“If there is a need for safe harbour, who should be entitled to it? The whole logic of safe harbour is that platforms have absolutely no power or control over the content that some other consumer creates on the platform. But, in this day and age, is that really necessary? Is that safe harbour required?”

What would online speech policy look like without safe harbour provisions? It could usher in the near-total privatisation of censorship, with platforms having to proactively and expansively police content to avoid liability. This is why the European safe harbour provisions in the EU eCommerce Directive were left untouched during the negotiations around the DSA. The Indian government has highlighted the importance of the DIA in addressing the growing power of tech giants like Google and Meta, with Chandrasekhar stating in 2024 that “[t]he asymmetry needs to be legislated, or at the very least, regulated through rules of new legislation”. Against that backdrop, gifting tech companies the power to decide what can and can’t be published online would represent an alarming recalibration, one that runs at odds with the Bill’s stated aims.

The changing approach to online expression is also evidenced in the slides used by the minister during the 2023 Bengaluru consultation. For instance, the internet of 2000 was defined as a “Space for good – allowing citizens to interact” and a “Source of Information and News”. But for MeitY, by 2023 it had curdled somewhat into a “Space for criminalities and illegalities” and a space defined by the “Proliferation of Hate Speech, Disinformation and Fake news.” This shift in perception also frames how the government identifies potential online harms. During the consultation, the minister stated that “[t]he idea of the Act is that what is currently legal but harmful is made illegal and harmful.” A number of harms were included in the minister’s presentation, highlighting everything from catfishing and doxxing, to the “weaponisation of disinformation in the name of free speech” and cyber-fraud tactics such as salami-slicing. This covers a universe of harms, each of which would require a distinct and tailored response, so questions remain as to how the DIA can adequately address them all without adversely affecting internet users’ fundamental rights.

As a draft bill is yet to be published, there is no way of knowing which harms the DIA will cover, and speculation has filled the vacuum. To illustrate this point, the Internet Freedom Foundation has compiled an expansive list of what the Bill could regulate, collated solely from media coverage of the Bill from July 2022 until June 2023. This included everything from “apps that have addictive impact” and online gaming to deliberate misinformation and religious incitement material. Also unclear is how platforms or the state will be expected to respond to these harms. As we have seen in the UK and across Europe, without clarity, full civil society engagement and a robust rights framework, work to address online harms can significantly impact our right to free expression.

For now, the scope and scale of the government’s ambition can only be guessed at. For Index, the central question is: how can this be done while protecting the fundamental right to free expression, as outlined in Article 19 of the Indian Constitution and international human rights law? This is an issue of significant importance for everyone in India.

This is why Index on Censorship is kicking off a project to support Indian civil society engagement with the DIA to ensure it is informed by the experiences of internet users across the country, can respond to the learnings from other jurisdictions legislating on the same challenges and can adequately protect free expression. We will be engaging with key stakeholders prior to and during the consultation process to ensure that everyone’s right to speak out and speak up online, on whichever platform they choose, is protected.

If you are interested in learning more about this work, please contact [email protected]

Last year, we published an issue of Index dedicated to issues related to free expression in India. Read it here.

How artificial intelligence is influencing elections in India

It has been less than six months since Divyendra Singh Jadoun, the 31-year-old founder of an artificial intelligence (AI)-powered synthetic media company, started making content for political parties in India. In this short time he has come to be known as the “Indian Deepfaker”, as political parties across the ideological spectrum reach out to him for digital campaigning.

Jadoun’s meteoric rise has a lot to do with the fact that close to a billion people are voting in India’s elections, the longest and largest in the world, which started last month. He says he doesn’t know of a single political party that hasn’t sought him out to enhance its outreach. “They [political parties] don’t reach out to us directly, though. Their PR agencies and political consultants ask us to make content for them,” said Jadoun, who runs Polymath, a nine-employee AI firm based in a small town known for its temples in the north Indian state of Rajasthan.

In India’s fiercely divided election landscape, AI has emerged as a newfound fascination, particularly as the right-wing ruling Bharatiya Janata Party (BJP) vies for an unusual third consecutive term. The apprehension surrounding technology’s capabilities in a nation plagued by misinformation has raised concerns among experts.

Jadoun says his team has been asked many times to produce content which they find highly unethical. He has been asked to fabricate audio recordings that show rival candidates making embarrassing mistakes during their speeches or to overlay opponents’ faces onto explicit images.

“A lot of the content political parties or their agents ask us to make is on these lines, so we have to say no to a lot of work,” Jadoun told Index on Censorship.

Some campaign teams have even sought deliberately low-quality fake videos from Jadoun featuring their own candidate, which they intend to deploy to discredit any potentially damaging authentic footage that surfaces during the election period.

“We refuse all such requests. But I am not sure if every agency will have such filters, so we do see a lot of misuse of technology in these elections,” he says.

“What we offer is simply replacing the traditional methods of campaigning by using AI. For example, if a leader wants to shoot a video to reach out to each and every one of his party members, it will take a lot of time. So we use some parts of deep-fakes to create personalised messages for their party members or cadres,” Jadoun adds.

Pervasive use

India’s elections are deeply polarised, and the ruling right-wing BJP has employed a vicious anti-minority campaign to win over the majority Hindu voters, who form roughly 80% of the electorate. The surge in the use of AI reflects both its potential and the concerns it raises amid widespread misinformation. A survey by cybersecurity firm McAfee, taken last year, found that over 75% of Indian internet users have encountered various types of deepfake content while online.

In some of the most disturbing content, dead politicians have been resurrected through AI to sway voters. Earlier this year, the official account of the regional All India Anna Dravida Munnetra Kazhagam (AIADMK) party shared an audio clip featuring a virtual rendition of Jayalalithaa, a revered Tamil political figure who died in 2016. In the speech, her AI avatar aimed to inspire young party members, advocating for the party’s return to power and endorsing current candidates for the 2024 general elections.

Jayalalithaa’s AI resurrection is not an isolated case.

In another instance, just four days prior to the start of India’s general election, a doctored video appeared on Instagram featuring the late Indian politician H Vasanthakumar. In the video, Vasanthakumar voices support for his son Vijay Vasanth, a sitting Member of Parliament who is contesting the election in his father’s erstwhile constituency.

The ruling Bharatiya Janata Party (BJP), known for its use of technology to polarise voters, has also shared a montage showcasing Prime Minister Modi’s accomplishments on its verified Instagram profile. The montage featured the synthesized voice of the late Indian singer Mahendra Kapoor, generated using AI.

Troll accounts subscribing to the ideology of different political parties are also employing AI and deepfakes to create narratives and counter-narratives. Bollywood star Ranveer Singh cautioned his followers in a tweet last month to be vigilant against deepfakes, after a manipulated video circulated on social media platforms in which Singh appeared to criticise Modi. Using an AI-generated voice clone, the altered video falsely portrayed Singh lambasting Modi over issues of unemployment and inflation, and advocating for citizens to support the main opposition party, the Indian National Congress (INC). In reality, he had praised Modi in the original video.

“AI has permeated mainstream politics in India,” said Sanyukta Dharmadhikari, deputy editor of Logically Facts, who leads a seven-member team fact-checking misinformation in different vernacular languages.

Dharmadhikari says that countering disinformation or misinformation becomes extremely difficult in an election scenario as false information consistently spreads more rapidly than fact-checks, particularly when it aligns with a voter’s confirmation bias. “If you believe a certain politician is capable of a certain action, a deepfake portraying them in such a scenario can significantly hinder fact-checking efforts to dispel that misinformation,” she told Index on Censorship.

Selective curbs

Amidst growing concerns, the Indian government rushed to regulate AI by asking tech companies to obtain approval before releasing new tools, just a month before the elections. This is a substantial shift from its earlier position, when it informed the Indian Parliament that it would not interfere in how AI was being used in the country. Critics argue that the move might be another attempt to selectively clamp down on the opposition and limit freedom of expression. The Modi government has been widely accused of abusing central agencies to target the opposition while overlooking allegations involving its own leaders or those of its coalition partners.

“There needs to be a political will to effectively regulate AI, which seems amiss,” says Dharmadhikari. “Even though the Information Ministry at first seemed concerned at the misuse of deepfakes, gradually we have seen they have expressed no concerns about their dissemination, especially if something is helping [PM] Modi,” she added.

Chaitanya Rohilla, a lawyer based in Delhi who initiated a Public Interest Litigation (PIL) at the Delhi High Court concerning the unregulated use of AI and deepfakes in the country, believes that as technology unfolds at breakneck speed, the need for robust legal frameworks to safeguard against AI’s emerging threats is more pressing than ever.

“The government is saying that we are working on it…We are working on rules to bring about or to specifically target these deepfakes. But the problem is the pace at which the government is working, it is actually not in consonance with how the technology is changing,” Rohilla told Index on Censorship.

Rohilla’s PIL requested that the judiciary restrict access to websites that produce deepfakes, proposing that such websites be mandated to label AI-generated content and prohibited from generating illicit material.

But Indian courts have refused to intervene.

“The Information Technology Act that we have in our country is not suitable; it’s not competent to handle how dynamically the AI environment is changing. So as the system is unchecked and unregulated, it (deepfake dissemination) would just keep on happening and happening,” he added.

India’s hate speech trackers are being blocked

In January this year, when Raqib Hameed Naik received a notice from X (formerly Twitter) that Hindutva Watch had been blocked on the platform at the order of India’s ruling Hindu-nationalist Bharatiya Janata Party (BJP) government, he was not surprised. The government had submitted more than 28 legal requests to X over the past two years seeking the removal of Hindutva Watch’s posts. As well as the X account being blocked in India, the Hindutva Watch website was, and remains, inaccessible in the country.

“While shocking, it’s not surprising, considering Prime Minister Modi regime’s history of suppressing free press & critical voices,” Naik wrote on X on 16 January in reaction to the ban.

Naik, a journalist reporting on conflict and the marginalisation of minorities, founded and runs US-based independent research project Hindutva Watch, which tracks hate crimes by right-wing Hindus against Muslims, Christians and members of the historically oppressed castes in India. The website of India Hate Lab, another initiative by Naik that is exclusively dedicated to tracking hate speech in India, has also been rendered inaccessible in the country.

Various law enforcement agencies have frequently attempted to erase Hindutva Watch’s and India Hate Lab’s documentation of hate crimes and hate speech against minorities, primarily on the pretext of violations of India’s controversial Information Technology (IT) Act 2000, though the government has never clarified which specific provisions the two websites violated.

The IT Act grants authorities the power to block access to information under the guise of safeguarding India’s “sovereignty, integrity, and security”. In 2022, the country’s Supreme Court invalidated a provision of the Act which empowered the government to prosecute individuals for sharing “offensive” messages online. Various governments, irrespective of political affiliations, had misused the provision to detain ordinary civilians critical of the government.

Hailing from the conflict-ridden India-administered Jammu and Kashmir, Naik started working as a journalist in 2014. He said it was evident to him from the beginning of his career that the government was vindictive towards journalists and media outlets who reported critically on it, especially from sensitive areas like Kashmir. The situation, he believes, is worse for journalists from minority communities.

“The assault on the pillars of free press, coupled with the anti-minority policies and generation of an atmosphere grounded in hate and violence towards minority communities, profoundly affected me, as a Kashmiri Muslim journalist,” he told Index.

Naik’s fears were not unfounded. Initially covering political conflict and human rights in Kashmir, and later religious minorities and Hindu nationalism in India, he was among the handful of journalists who were able to report on the revocation of Kashmir’s autonomy by the BJP in 2019 amidst a tight curfew and a communication blockade, including internet shutdowns that lasted several months in the Himalayan valley.

As Naik’s reporting on Kashmir’s unrest gained international recognition, it also landed him in trouble. He faced questioning from the country’s intelligence officers and frequent inquiries from the police about his whereabouts and work, putting pressure on his family.

“It’s pure harassment but also a debilitating feeling,” said Naik, who has been in the USA since fleeing India in 2020, as the death threats and harassment over his reporting ramped up.

“Four years have passed, and I haven’t been home since. The thought of being unable to go home indefinitely just breaks my heart,” he said. “But then, I have to gather strength because there are very few journalists left in the country to tell and humanise stories of the minorities.”

Initially running the two websites and their X accounts anonymously from Massachusetts in the USA, Naik has created a unique and robust digital database of human rights abuses, which routinely occur everywhere from big cities to remote villages in what is considered the world’s largest democracy. Yet such cases do not receive adequate mainstream press coverage in India. Two news outlets from the country, Hindustan Times and IndiaSpend, made attempts to monitor hate crimes, only to stop in 2017 and 2019 respectively.

Modi’s tenure so far has been marred by increased suppression of dissent, targeting critics such as journalists, activists, academics, lawmakers and minority communities in India. Shortly before Hindutva Watch and India Hate Lab were blocked, Modi presided over the consecration of the new Ram temple, constructed on the ruins of a historic Mughal-era mosque which was demolished by a mob in 1992. The ensuing riots took over 2,000 lives, while the site remained a point of contention for over three decades.

According to IndiaSpend’s Citizen’s Religious Hate Crime Watch, a data-driven news platform, around 90% of religiously-motivated hate crimes that have occurred since 2009 did so after the BJP took power at a national level in 2014.

The scale of these hate crimes remains obscured. After 2017, the country’s national crime bureau stopped keeping a separate record of hate crimes or lynchings. Naik’s project keeps track of such incidents in the absence of any documentation by authorities or media in the country. Run by 12 volunteers spread across five countries, the project documents two to four hate events daily, using video and picture evidence submitted by a network of Indian activists and citizens.

“Since 2021, we have documented and archived thousands of videos and stories on hate crimes and hate speeches,” Naik said. “Our efforts have led to actionable outcomes in states where law agencies were willing to take a stand against right–wing members involved in such activities.”

“What we have collected serves as the evidence for facilitating judicial intervention, particularly in cases related to hate speech,” he added.

According to the latest report released by India Hate Lab before it was blocked in India, nearly two anti-Muslim hate speech events took place every day in 2023, and around 75% of those occurred in states ruled by the BJP. Collating a total of 668 hate speech events, the report observed that cases peaked between August and November 2023 – the period of political campaigning and polling in four major states in the country.

In India, press freedom has also taken a severe plunge under Modi’s leadership, and with people now heading to the polls, Naik worries that the blocking of his websites could tighten the government’s grip on the information ecosystem in the country.

Despite the suspension, he remains undeterred in continuing his work. He says: “There is extreme fear. And the climate of fear may continue to stifle reporting. But I know there are journalists who won’t succumb or surrender. I see hope in them.”

Read more about how authorities are silencing their critics across borders in the upcoming issue of Index. For a 50% discounted subscription to our digital edition, visit our page on Exact Editions and use the code Spring24.