Free speech in India? Not in 2012

From journalists murdered for chasing stories of illegal mining to exploding packages delivered to newspaper offices, India battled with a range of free expression and censorship issues in 2012, a report released this week by media watchdog The Hoot shows.

Harassment in the form of stone-throwing, physical assault and even bullets was meted out to journalists exposing the underbelly of India, especially when reporting on cases of deep corruption by politicians.

The arts also saw censorship in the form of shows cancelled over objections to themes such as homosexuality, and the much-publicised cancellation of Salman Rushdie’s visit to the Jaipur Literary Festival due to “security concerns”.

Section 66A of the IT Act 2000 also made headlines when ordinary citizens were arrested for criticising politicians on social media platforms, leading to massive public outrage.

Read the full report here

 

More on this story:

Salil Tripathi on why India must choose to defend free speech

India’s tussle with internet freedom

The threat of colonial-era sedition laws

Mere conduit no more: Italian court threatens international web freedom

UPDATE: An appeals court in Milan today acquitted three Google executives of violating the privacy of an Italian boy with autism in the so-called “Vividown” case. “We’re very happy that the verdict has been reversed and our colleagues’ names have been cleared. Of course, while we are delighted with the appeal, our thoughts continue to be with the family who have been through the ordeal,” said Giorgia Abeltino, Google Italy Policy Manager, in a statement.

The European Union Directive on electronic commerce is not the most inspirationally named document. The title would barely fit on a placard, and scans awkwardly for sloganeering (“What do we want?” “Implementation of the Electronic Commerce Directive!” “When do we want it?” “Within an agreed scheduled framework period, subject to negotiation between neighbour states and key stakeholders!”)

But the eCommerce Directive, as it is known by, er, some people, states a principle that is absolutely crucial to how the web works.

Article 12 of the directive, adopted in 2000, establishes the principle of the “mere conduit”. That is the idea that an Internet Service Provider is not liable for content hosted on its platform, provided it “(a) does not initiate the transmission; (b) does not select the receiver of the transmission; and (c) does not select or modify the information contained in the transmission.”

This idea means that, at least in theory, Facebook, YouTube etc. can allow users to post anything on their platforms without worrying about having to account for it legally.

I say “in theory”. Today (21 December), an appeal will take place in an Italian court over a ruling which severely tested the concept of “mere conduit”.

In September 2006, Italian secondary school students posted a video of a boy with Down’s Syndrome being taunted and beaten by other teenagers. The video remained online until November that year, when it was removed by YouTube following a request by Italian police.

In 2010, three Google executives were found guilty of breach of privacy by an Italian court in a case brought by the Down’s Syndrome charity Vivi Down. The appeal comes to court on Friday.

Google protests that it acted as soon as it was notified by the authorities that the video may be illegal. Prosecutors claim that YouTube should have responded to private complaints sooner.

Videos of bullying are unpleasant to say the least, but the people responsible for the harassment of the boy, and the uploading of the video, have been convicted.

The 2010 conviction of Google employees seriously undermines the idea of the ISP as “mere conduit”, and with it, the way the web works. If social platforms are to be held responsible for all content, the consequences could be catastrophic for the way we operate on the web. Even the Chinese internet police cannot pre-moderate every single piece of content uploaded, which is what ISPs may feel obliged to do should they be held responsible for content. The alternative might be an automated “banned words” list. Either way, we would see an enormous escalation of censorship. What’s more, we would be entrenching, even further than it already exists, a system of privatised censorship. By handing over responsibility for what we say online from individuals to ISPs, we would be allowing private companies even more power than the state has to govern our speech.

Already this week there has been uproar over Instagram’s (attempted, then hastily withdrawn) grab for users’ content, itself perhaps a breach of mere conduit status.

And if this ruling is upheld in Italy, we’ll be facing another blow to individuals’ free use of the web. Already, a great deal of our communication happens across private networks. If they are legally responsible for every word, picture and video, they will be inclined to caution, and our space to speak will narrow ever further.

Padraig Reidy is news editor at Index on Censorship

Social media guidelines: Nice start, but still a long way to go

Keir Starmer’s interim social media guidelines appear sensible enough, which is more than can be said for the controversial cases that led to the Director of Public Prosecutions’ consultation.

Index took part in that consultation back in October. I wrote at the time that Starmer was adamant the ruling in the Paul Chambers appeal (which overturned his 2010 conviction for jokingly tweeting that he would blow an airport “sky high”) was not to be seen as any sort of precedent. Yet in the guidelines published today, Starmer cites the two passages in that ruling that seemed to provide the most protection for free speech, the first of which noted:

…a message which does not create fear or apprehension in those to whom it is communicated, or may reasonably be expected to see it, falls outside [section 127(1)(a) of the Communications Act 2003], for the simple reason that the message lacks menace.

And:

Satirical, or iconoclastic, or rude comment, the expression of unpopular or unfashionable opinion about serious or trivial matters, banter or humour, even if distasteful to some or painful to those subjected to it should and no doubt will continue at their customary level, quite undiminished by [section 127].

So it would seem there’s been a slight change of mind, which is entirely reasonable and welcome (though on Twitter Chambers’ partner Sarah Tonner seems a little annoyed by this apparent switch).

Apart from that, what else have we got to discuss in these interim guidelines? Well, there’s a slight shift away from the use of the controversial section 127 of the Communications Act. At the consultation I attended, the various representatives, from diverse groups including anti-bullying and anti-harassment bodies, were keen to stress that section 127 was not appropriate for social media, and that it would be better to focus on patterns of harassment, abuse etc, and prosecute, if necessary, under anti-harassment laws such as the Protection from Harassment Act 1997. This is welcome – too often we focus on the medium rather than the behaviour.

More generally, there’s much on high thresholds for prosecution and on clear identification of the public interest, perhaps not evident in prosecutions such as that of Liam Stacey (sentenced to 56 days in prison for a “racially aggravated public order offence” after tweeting a poor-taste joke about footballer Fabrice Muamba).

There is not much on the difference between the “merely offensive”, which may not merit a prosecution, and the “grossly offensive”, which could. As so often, this comes down to the probable perception of a right-thinking person. As with definitions of “obscenity”, it seems a case of “I know it when I see it”.

There is a worry in the suggestion that removal of offensive posts by ISPs may provide a defence against prosecution.

While Facebook, Twitter et al will sometimes remove posts off their own bat, there is no uniform system, and given the sheer volume of traffic on social networks every day, some posts will slip through while others are removed prematurely or inappropriately. Furthermore, this contains the germ of a suggestion of third-party liability, in which ISPs are held responsible for content. It will be crucial to examine this in the three-month public consultation on the guidelines, which opens today. It will also be worth examining whether section 127 of the Communications Act is appropriate at all in social media cases.

A decent start then, but more to be done.

Padraig Reidy is news editor at Index. Follow him on Twitter: @mepadraigreidy

More on this story:
Read the guidelines in full here
Graham Linehan on the Twitter Joke Trial
Paul Sinha on a tale of two tweets
Do western democracies protect free speech?