Today, the torrent of online information, misinformation and disinformation makes it harder than ever to stay in the loop. As we get bombarded with news from all angles, important stories can easily pass us by. To help you cut through the noise, every Friday Index publishes a weekly news roundup of some of the key stories covering censorship and free expression from the past seven days. This week, we look at more news from Donald Trump’s USA, yet another rapper having his music banned for criticising the powerful, and the announcement of a new uncensored social media network from former UK Prime Minister Liz Truss.
Deportations: Trump administration faces possible contempt proceedings over ignored Supreme Court order
US district judge Paula Xinis says she is considering instigating contempt proceedings against the Trump administration for failing to facilitate the return to the US of Salvadorean national Kilmar Armando Abrego Garcia, who was deported in March.
Garcia, who is originally from El Salvador but entered the US illegally as a teenager, is one of many alleged members of the MS-13 and Tren de Aragua gangs who were flown on US military planes and detained in El Salvador’s notorious Cecot (Terrorism Confinement Centre) in March. Garcia’s lawyer denies he is a member of either gang.
Garcia’s deportation came despite an immigration judge’s 2019 order barring him from being sent to his home country. The US Government said he was taken there as the result of an “administrative error”.
On 11 April, the US Supreme Court ruled unanimously (9-0) that the Trump administration must “facilitate” Garcia’s release.
Trump advisor Stephen Miller has since portrayed the ruling as being unanimously in favour of the government. “We won a case 9-0, but people like CNN are portraying it as a loss,” he said. This is despite the Supreme Court declining to block the Maryland District Court ruling that the government should do everything in its power to facilitate his return. On a recent visit to the US, El Salvador’s President Nayib Bukele said he would not release Garcia, as he was not fond of releasing people from his prisons, adding that he did not have “the power” to return him to the USA.
The New Yorker says the Trump administration has “slow-walked or outright failed to comply with court orders related to a range of issues, most notably immigration and government funding”.
Music censorship: Afrobeat track criticising Nigeria’s President banned
On 9 April, Nigeria’s National Broadcasting Commission banned the Afrobeat track Tell Your Papa from TV and radio.
Tell Your Papa was released three days earlier by the rapper Eedris Abdulkareem with lyrics in Nigerian Pidgin English and Yoruba. The song is aimed at Seyi Tinubu, the son of Nigeria’s President Bola Tinubu, calling on him to ask his father about his jet-setting lifestyle against a backdrop of worsening socio-economic conditions in the country.
Abdulkareem rose to prominence in the 1990s as a pioneer of Nigerian hip-hop with the group The Remedies.
Throughout his career, he has courted controversy with his music, attacking sexual harassment in Nigeria’s universities in the song Mr Lecturer and criticising corruption and poor governance by former President Olusegun Obasanjo on the 2004 album Jaga Jaga, the title track of which was banned.
Reporting curtailed: Families of exiled Belarusian journalists harassed
Belarusian dictator Alyaksandr Lukashenka has continued his crackdown on exiled independent journalists who report on the country and its president from abroad.
The Belarusian Association of Journalists (BAJ), which was declared an extremist organisation and banned from operating in Belarus in 2023, has reported that security forces in the country have intensified pressure on journalists remaining in Belarus, as well as on the relatives of media workers forced into exile.
BAJ reports that security officers have visited the registered addresses of independent journalists who are currently working abroad. In some instances, these visits included searches of the premises in connection with criminal cases opened against the journalists.
In January, the United Nations criticised the country for the growing use of in-absentia trials – there were 110 people subjected to these trials in 2024 compared to 18 in 2023. BAJ says that a large number of media workers have become subjects of criminal investigations as a result.
Many Belarusian journalists have also been added to Russia’s wanted persons database at the request of the Belarusian authorities, according to Mediazona. The list includes Belsat TV channel director Alina Kovshik, Euroradio’s Maria Kolesnikova and Zmitser Lukashuk, and Radio Svaboda’s Dmitry Gurnevich and Oleg Gruzdilovich.
Journalists under attack: Indigenous radio reporter intimidated after criticising Mexican road project
An Indigenous journalist and human rights defender has received intimidating messages and calls from local authorities in Mexico after she reported on a case of land dispossession that potentially involved an adviser to one of those authorities.
Miryam Vargas Teutle is a Nahua Indigenous communicator from the Choluteca region of the country who works as a journalist for Cholollan Radio. On her programme, Vargas highlighted the Bajadas del Periférico road construction project, which could affect the ancestral territories of the Tlaxcalancingo people and limit their access to water.
After the programme was aired, posts attempting to discredit her work appeared on Facebook and she allegedly began to receive intimidating WhatsApp messages and calls from staff of the municipality of San Andrés Cholula.
According to Vargas, the senders also threatened to restrict Cholollan Radio’s airtime.
Social media: Former UK Prime Minister Liz Truss to launch social network
The Conservative ex-Prime Minister Liz Truss, who succeeded Boris Johnson in 2022 and resigned just over six weeks later, has said she wants to launch an “uncensored” social network to counter the mainstream media.
Truss’s plans mirror those of US President Donald Trump, who announced Truth Social in 2021 as a platform for “people of all political stripes, and all different viewpoints, to come and participate once again in the great American debate”.
Truss revealed the news at a cryptocurrency conference in Bedford last weekend. She said the UK needs a network that is “really demanding change of our leaders” and that issues were “suppressed or promoted” by the mainstream media – “the kind of thing that we used to see going on in the Soviet Union”.
This week, the global conversation was dominated by one word: tariffs. China was no exception, but not all conversations were allowed to unfold freely. On major Chinese social media platforms, searches for “tariff” and “104” (a numeric stand-in for the 104% tariff rate) led to dead ends, error messages or vanishing posts. It wasn’t silence across the board, though. Some conversations weren’t just permitted, they were actively promoted. State broadcaster CCTV pushed a hashtag that quickly went viral: #UShastradewarandaneggshortage. Meanwhile, posts encouraging Chinese alternatives to US goods saw a notable boost from platform algorithms.
To outsiders, this patchwork of censorship versus amplification might seem chaotic or contradictory. In reality, it follows a clear, strategic logic. China’s censorship system is built on a few core principles: block anything that goes viral and paints the government in a bad light, suppress content that risks sparking public anger or social unrest, and amplify posts that reflect well on the nation or state. At its heart, it’s about control – of the message, the momentum and the mood. “Saving face” isn’t just cultural etiquette in China, it’s political strategy.
Curiously, this is not only a top-down game. A significant driver of online sentiment today is cyber nationalism, a fast-growing trend where patriotic fervour, often fuelled by influencers, bloggers and grassroots communities, aligns with state objectives. Cyber nationalism is both tolerated and profitable. Pro-nationalist influencers can rake in millions in ad revenue and merchandise sales. The state, in turn, benefits from a wave of popular support that looks organic, and is, to a degree. But there are limits. These nationalist fires are only allowed to burn within a safe perimeter.
When it comes to the trade war, China’s censors are turning “crisis” into “opportunity”, wrote Manya Koetse on What’s On Weibo. Unless there’s a U-turn, the outlook for many Chinese people could darken – except for those employed in the booming censorship industry. Even there, though, job security isn’t guaranteed: in another example of politics aligning with profit, online censorship is increasingly automated through AI. So while Washington and Beijing trade blows, China’s digital censors are advancing the government line – and scaling it too.
PS: If you want more on the inner workings of Chinese censors, read this excellent article from two years ago about how local TV stations air stories on government corruption in a way that ultimately benefits the government.
In recent months, several young men and women in Uganda have been arrested and charged for views they expressed on TikTok.
In the East African country, the freedom of expression landscape has deteriorated to the extent that one cannot hold a placard and march anywhere in support of a cause or in protest against an injustice. Anyone who does will be roughed up by the Uganda Police Force, bundled into a police van, locked up in a cell and charged with the colonial-era “common nuisance” offence that the government uses to crush demonstrations.
Consequently, people with critical views turn to social media platforms like TikTok and X. Facebook is not available in Uganda – it was banned in January 2021 after the platform pulled down hundreds of pages that were linked to the government and thought to be fake. Facebook said that it acted after an investigation showed the accounts were attempting to influence the January 2021 presidential elections in favour of the incumbent, Yoweri Museveni, who has ruled Uganda since January 1986.
But even on TikTok or X, which are still allowed in the country, there is the likelihood that you will be arrested for expressing views considered offensive (particularly towards members of the first family – the family of the president) or deemed hateful (usually to members of Museveni’s sub-tribe or tribe, Banyankole, who hold many top positions across multiple sectors in the country).
Those recently arrested and charged include 21-year-old David Ssengozi (alias Lucky Choice), 28-year-old Isaiah Ssekagiri and 19-year-old Julius Tayebwa, all charged in November 2024 with hate speech and spreading malicious information against the first family. They are now awaiting trial.
There are more, although reporting is sparse. Instead, TikTokers themselves cover each other’s cases. Agora Discourse, a platform holding the Ugandan authorities to account, gave Index a list of those who have been charged. They include 26-year-old Muganga Fred, 19-year-old Wasswa Noah (alias Sturbon Josh) and Passy Mbabazi, a member of the National Unity Platform (NUP), the leading opposition party in the country, all charged with hate speech against either Museveni, his family or party members.
Except for Mbabazi’s case, which is ongoing at Bushenyi Magistrate’s Court, Western Uganda, the rest of the cases are being tried at Entebbe Chief Magistrates’ Court in Central Uganda, under one magistrate, Stella Maris Amabilis, who has already found two TikTokers guilty as charged and sentenced them to jail terms.
One of these is 21-year-old Emmanuel Nabugodi, who received a 32-month sentence on 18 November 2024 for hate speech and spreading malicious information about President Museveni. The charges stemmed from a comedy video in which he held a mock trial of the long-ruling soldier and politician, finding him guilty and sentencing him to a public flogging.
The other is 24-year-old Edward Awebwa, who received a six-year jail term for demeaning President Museveni, his wife Janet Museveni and his son General Muhoozi Kainerugaba. These two convictions make Amabilis, the magistrate, predictable – it is likely that the rest of the TikTokers being tried by her will be found guilty as well.
At least three patterns arise from the above arrests and charges (and, in two cases, prison sentences). First, Museveni and his family members are the offended parties – the untouchables against whom nobody dares raise a voice. This makes the charges politically motivated, their sole aim being to crush dissent.
Second, the commonly preferred charges are hate speech and spreading malicious information about the people in the ruling party, under the notorious Computer Misuse Act (as amended in 2022).
Finally, most of those being criminally prosecuted are young people, mainly in their twenties.
It is nothing new for critical voices posting online to suffer prosecution in Uganda. Take Stella Nyanzi, an academic, poet and politician, and Kakwenza Rukirabashaija, a novelist, memoirist and lawyer. The former was jailed in 2019 for 18 months for writing a poem on Facebook suggesting that Museveni should have died at birth to save Uganda from tyranny. The latter was kidnapped, detained and, by his own account, tortured in December 2022, after he wrote on Twitter (now X) that Museveni’s son Kainerugaba was “obese” and a “curmudgeon” and that the Musevenis had “imposed enormous suffering on this country [Uganda]”.
These arrests and prosecutions usually target voices critical of the ruling party. The people who use their social media accounts to express views critical of opposition politicians do not face arrest or prosecution.
According to Godwin Toko, a lawyer and human rights activist, and a member of the Network of Public Interest Lawyers (NETPIL), the crackdown on TikTokers is meant to entrench a culture of silence, unaccountability and untouchability by instilling fear in Ugandans so that they do not participate in public debates that call for better governance.
“Generally, freedom of expression is the bedrock for other freedoms. Without it, other freedoms are hard to guarantee. This has greatly impacted our democracy as people aren’t able to hold leaders accountable,” Toko told Index, calling on Ugandans to “boldly, fearlessly and persistently hold their leaders accountable by using any means necessary to safeguard and further freedom of expression”.
Toko is one of the founding members of Agora, a digital public square spotlighting mismanagement of public resources, be it roads that are potholed, hospitals that are not adequately staffed and stocked, or public institutions, like Parliament, that are corrupt.
The platform was founded after the Public Order Management Act came into force in 2013, making it effectively impossible for Ugandans to hold peaceful protests. The Uganda Police Force has been criticised for misinterpreting the law and shutting down any form of demonstration, as shown by the brutality with which it arrests anybody who attempts to hold a placard in support of a cause or against an injustice.
But those loyal to the ruling party are allowed to hold demonstrations of any kind, whenever they wish. These double standards, common among Uganda’s ruling elite, are what make TikTokers and freedom of expression activists so loud in their condemnation of Museveni, his family members and his ardent lieutenants.
Unfortunately, this comes at a heavy cost – brutal arrests, drawn-out judicial trials and potentially long prison sentences – to which jailed TikTokers Nabugodi and Awebwa, among others, can attest.
Mark Zuckerberg’s announcement this week of changes to Meta’s content moderation policies appeared to be primarily about building trust. Trust among users. Trust among investors. And trust with the incoming Trump administration. “It’s time to get back to our roots around free expression,” Zuckerberg said in his announcement.
While we applaud anything that broadly aims to embolden free expression, will these moves actually do that? We break it down below.
In the USA, Meta is abandoning the use of independent fact-checkers on its platforms (Facebook, Instagram and Threads) and replacing them with X-style “community notes”, where commenting on the accuracy or veracity of posts is left to users. But fact-checks by dedicated fact-checking organisations do not work against free expression. As a rule they do not remove, override or stifle existing content. Instead they challenge it and contextualise it. As tech expert Mike Masnick wrote after the announcement: “Fact-checking is the epitome of ‘more speech’ – exactly what the marketplace of ideas demands. By caving to those who want to silence fact-checkers, Meta is revealing how hollow its free speech rhetoric really is.”
On the flipside, as Masnick also points out, professional fact-checkers are not always effective. The “people who wanted to believe false things weren’t being convinced by a fact check (and, indeed, started to falsely claim that fact checkers themselves were ‘biased’),” he writes. Zuckerberg himself invoked this notion of “bias”, levelling the same accusation at fact-checkers.
While the set-up that existed until now has been imperfect, are the proposed community notes any better? This is complicated, and there is little evidence to suggest they work to the extent that Zuckerberg claims. Community notes tend to be effective for issues on which there is consensus, because there must be agreement before a note can be added to a post. This means that misleading posts on politically divisive subjects often go unchecked, while some accurate posts can be flagged as untrue if enough people determine it that way. According to MediaWise, a media literacy programme at the Poynter Institute, only about 4% of drafted community notes about abortion and 6% of those on immigration were made public on X.
There is also a big difference between those who are paid (and qualified) to fact-check and non-professionals, and this shows up in the logistics. According to X, “in the first few days of the Israel-Hamas conflict, notes appeared at a median time of just five hours after posts were created.” In the online world, where a post can go viral within minutes, hours is a long time, arguably too long.
In addition to getting rid of dedicated fact-checkers, Meta is dialling back its content moderation teams and reducing reliance on filters. The move away from automated content moderation processes is to be welcomed. Due to the complexity of speech and online content sharing – with languages and communities evolving slang, colloquialisms and specific terminology – and the ambiguity of imagery, automated processes cannot capture the contextual detail or complexity necessary to make consistent and informed decisions.
Mis- and disinformation are problematic standards for content removal too. Satire, for instance, is often presented as fact while being obviously false, yet it is a central tenet of protected speech across the globe. Simply removing every post deemed to contain misinformation does not work, and has not worked.
What is more, censoring misinformation does not address the root cause; removing fake news only temporarily silences those who spread it. It doesn’t demonstrate why the information they are spreading is inaccurate. It may even give conspiracy theorists more reason to believe in their theories, by making them feel they are being denied access to information. It can end up undermining trust.
Content moderation isn’t just about removing perceived or real misinformation. It is also about removing posts that propagate hate and/or incite violence. As with misinformation, these rules have to date been imperfectly applied – sweeping up legal speech and missing illegal speech. Algorithms are ultimately imperfect. They miss nuance, and this has had a negative impact on speech across Meta’s platforms.
It is right for Meta to review these policies, which have too often failed the free speech test.
Still, in scaling filters back – rather than addressing how to improve them – Meta runs the risk of allowing a lot more bad content in. Zuckerberg, by his own admission, says that the newly introduced measures are a “trade-off”: “It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”
The flipside of catching “less bad stuff” can be, ironically, less free speech. Harassment can drive people to silence themselves or leave online spaces entirely. This form of censorship (self-censorship) is insidious and cannot be easily measured. Unchecked, it can also lead to some of the gravest attacks on human rights. In 2022, Amnesty issued a report looking into Meta’s role in the Rohingya genocide. It detailed “how Meta knew or should have known that Facebook’s algorithmic systems were supercharging the spread of harmful anti-Rohingya content in Myanmar, but the company still failed to act”.
Following Zuckerberg’s announcement, Helle Thorning-Schmidt, from Meta’s oversight board, said: “We are seeing many instances where hate speech can lead to real-life harm.” She raised concerns about the potential impact on the LGBTQ+ community, among others.
Another damning response came from Maria Ressa, Rappler CEO and Nobel Peace Prize winner:
“Journalists have a set of standards and ethics. What Facebook is going to do is get rid of that and then allow lies, anger, fear and hate to infect every single person on the platform.”
Finally, Zuckerberg said the remaining content moderation teams will be moved from California to Texas where, he said, “there is less concern about the bias of our teams”. As pointed out by many, including the Electronic Frontier Foundation, there is no evidence that Texas is less biased than California. Given Texas’s political leadership and the perception that the state is more closely allied with the incoming administration, there are real concerns that this simply replaces one set of perceived biases with another. Instead, a free-speech-first approach would be to address what biases exist and how current teams can overcome them, irrespective of geographical location. Establishing a process based on international human rights and free expression standards would be a step in the right direction.
In Zuckerberg’s announcement he stated “we’re going to simplify our content policies and get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse. What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas, and it’s gone too far.”
Simplifying the policies can increase their efficacy, making users clearer about the standards employed on the platforms. However, “mainstream discourse” is a shifting threshold that could embed uncertainty into how Meta responds to an ever-changing and complex speech environment. Singling out topics such as immigration and gender threatens to define those thresholds by the contentious topics of the day rather than by objective standards or principles for free expression.
It could also open the floodgates to a lot of genuine hate speech and incitement, which will be incredibly damaging for many individuals and communities – in general and in terms of free speech.
Zuckerberg also took issue with foreign interference. Platforms and governments have often collided over their interpretations of what is acceptable content and who has the power to decide. Ideally we would have standardised community guidelines and rules of moderation in line with international human rights law; in practice, this is not the case. Yet instead of highlighting countries where the human rights record is woeful and content removal requests have been clearly politically motivated, Zuckerberg cited Latin America and Europe. Article19 said it was “puzzled by Mark Zuckerberg’s assertion that Europe has enacted an ‘ever-increasing number of laws institutionalizing censorship’” and that it showed “misunderstanding”.
Parking a discussion of EU laws, the framing was certainly disappointing for the reasons stated above. As reported by the Carnegie Center in 2024: “In illiberal and/or autocratic contexts, from Türkiye to Vietnam, governments have exploited the international debate over platform regulation to coerce tech companies to censor—rather than moderate—content.” That is where the conversation needs to be happening.
Countries such as India have demonstrated processes by which political pressure can be exerted over content moderation decisions undertaken by social media platforms. According to the Washington Post, the Indian government has expanded its pressure on X: “Where officials had once asked for a handful of tweets to be removed at each meeting, they now insisted that entire accounts be taken down, and numbers were running in the hundreds. Executives who refused the government’s demands could now be jailed, their companies expelled from the Indian market.” Further in the piece, it states: “Records published by the Indian Parliament show that annual takedown requests for posts and accounts increased from 471 to 6,775 between 2014 and 2022, with those to Twitter soaring from 224 in 2018 to 3,417 in 2022.”
Zuckerberg’s announcement was silent on how Meta would respond to or resist such explicit state censorship in countries with weak and eroding democratic norms and standards.
For now, Meta says it has “no immediate plans” to get rid of its third-party fact-checkers in the UK or the EU, nor could it necessarily do so given the legal landscape there. Some countries, such as China, ban Meta’s platforms outright. So this is a story that will play out primarily in the USA.
Still, it is part of a broader pattern of Silicon Valley executives misusing the label “free speech”, and the timing suggests the motivation is political gain. Even incoming President Donald Trump acknowledged that this week. The shift towards kowtowing to one party and one person, which we have seen occur on other platforms, is incredibly worrying. As Emily Maitlis said on The News Agents this week when evaluating the announcement: “There is a king on the top here and there are courtiers and they recognise that their position is in terms of how they respond to the king now”.
Whether the platforms are used for sharing pictures of your family or galvanising support for a campaign, we know the powerful and central role social media plays in our lives. Furthermore, according to a 2022 OECD report, around four out of 10 respondents said they did not trust the news media, and more and more people, especially the young, were turning to social media for their news. As a result, it is essential that social media policy lands in a helpful place. Content moderation at scale is incredibly difficult and cumbersome; it is impossible to do perfectly and easy to do badly. Still, we have little faith that these changes will be helpful, and we have concerns that they could be harmful.
We will continue to monitor the situation closely. In the meantime, please do support organisations like Index who are genuinely dedicated to the fight against censorship and the fight for free expression.