Facebook made headlines this week over allegations by former staff that the site tampers with its trending topics algorithm to suppress conservative viewpoints while giving priority to liberal causes.
The news isn’t likely to shock many people. Attempts to control social media activity have been rife since Facebook and Twitter launched, in 2004 and 2006 respectively. We are outraged when political leaders ban access to social media, or when users face arrest or the threat of violence for their posts. But it is less clear-cut when social media companies remove content they deem in breach of their terms and conditions, or move to suspend or ban users they consider undesirable.
“Legally we have no right to be heard on these platforms, and that’s the problem,” Jillian C. York, director for international freedom of expression at the Electronic Frontier Foundation, tells Index on Censorship. “As social media companies become bigger and have an increasingly outsized influence in our lives, societies, businesses and even on journalism, we have to think outside of the law box.”
Transparency rather than regulation may be the answer.
Back in November 2015, York co-founded Online Censorship, a user-generated platform that documents content takedowns on six social media platforms (Facebook, Twitter, Instagram, Flickr, Google+ and YouTube), examining how these sites moderate user-generated content and how free expression is affected online.
Online Censorship’s first report, released in March 2016, stated: “In the United States (where all of the companies covered in this report are headquartered), social media companies generally reserve the right to determine what content they will host, and they do not consider their policies to constitute censorship. We challenge this assertion, and examine how their policies (and the enforcement thereof) may have a chilling effect on freedom of expression.”
The report found Facebook to be by far the most censorious platform. Of 119 reported incidents, 25 related to nudity and 16 to the user having a false name. Further down the list came takedowns on grounds of hate speech (six reports) and harassment (two).
“I’ve been talking with these companies for a long time, and Facebook is open to the conversation, even if they haven’t really budged on policies,” says York. If policies are to change and freedom of expression online is to be strengthened, “we have to keep the pressure on companies and have a public conversation about what we want from social media”.
Critics of York’s position might say that if we aren’t happy with a platform, we can always delete our accounts. But it may not be so easy.
Recently, York found herself banned from Facebook for sharing a breast cancer campaign. “Facebook has very discriminatory policies toward the female body and, as a result, we see a lot of takedowns around that kind of content,” she explains.
Even though York’s Facebook ban only lasted one day, it proved to be a major inconvenience. “I couldn’t use my Facebook page, but I also couldn’t use Spotify or comment on Huffington Post articles,” says York. “Facebook isn’t just a social media platform anymore, it’s essentially an authorisation key for half the web.”
For businesses or organisations that rely on social media on a daily basis, the consequences of a ban could be even greater.
Facebook can even influence elections and shape society. “Lebanon is a great example of this, because just about every political party harbours war criminals but only Hezbollah is banned from Facebook,” says York. “I’m not in favour of Hezbollah, but I’m also not in favour of its competitors, and what we have here is Facebook censors meddling in local politics.”
York’s colleague Matthew Stender, project strategist at Online Censorship, takes the point further. “When we’re seeing Facebook host presidential debates, and Mark Zuckerberg running around Beijing or sitting down with Angela Merkel, we know it isn’t just looking to fulfil a responsibility to its shareholders,” he tells Index on Censorship. “It’s taking a much stronger and more nuanced role in public life.”
It is for this reason that we should be concerned about content moderators, who often find themselves dealing with issues in which they have no expertise. Much of the takedown activity reported to Online Censorship involves anti-terrorist content mistaken for terrorist content. “It potentially discourages those very people who are going to be speaking out against terrorism,” says York.
Facebook has 1.5 billion users, so small teams of poorly paid content moderators simply cannot weigh all flagged content properly against the secretive terms and conditions laid out by social media companies. The result is arbitrary and knee-jerk censorship.
“I have sympathy for the content moderators because they’re looking at this content in a split second and making a judgement very, very quickly as to whether it should remain up or not,” says York. “It’s a recipe for disaster as it’s completely not scalable, and these people don’t have expertise on things like terrorism.”
Content moderators — mainly based in Dublin, but often outsourced to places like the Philippines and Morocco — aren’t usually full-time staff, and so don’t have the same investment in the company. “What is to stop them from instituting their own biases in the content moderation practices?” asks York.
One development Online Censorship would like to see is Facebook making its content moderation guidelines public. In the meantime, the project will continue to push for transparency by crowdsourcing reports of takedowns, allowing people to better understand what these platforms want from us.
These efforts are about getting users to rethink the relationship they have with social media platforms, says York. “Many treat these spaces as public, even though they are not, and so it’s a very, very harsh awakening when they do experience a takedown for the first time.”