Cast your mind back to January 2023, when the “world-leading, world-first Online Safety Bill” (Rishi Sunak, responding to Labour’s Alex Davies-Jones) faced a significant backbench rebellion over an executive liability clause.
When the Bill landed in the House of Lords days later, a precarious agreement between the Government and the rebels had passed on a vast baton of legislative issues. There was a collective sigh of relief that the upper chamber would be taking on the mantle.
The threat to encryption, and with it private messaging, didn’t even feature as a concern amongst legislators, let alone the government, despite the Bill introducing measures unprecedented in any western democracy.
Flash forward to September, and encryption features as the most important and urgent issue to address before the Online Safety Bill receives its imminent Royal Assent.
The efforts of my colleagues at Index on Censorship, partners across civil society, and the businesses that rely on encryption have all been vital in achieving this.
Confidence in the Government’s ability to grasp the full consequences and details of its legislation has worn thin. Index and others have consistently warned that Section 122 of the Act is a gateway to unprecedented mass surveillance of British citizens and a threat to vulnerable people up and down the country.
As Index on Censorship’s report with Matthew Ryder KC set out:
Section 122 notices establish the power to impose technologies that would intercept and scan private communications on a mass scale. The principle that the state can mandate the surveillance of millions of lawful users of private messaging apps should require a higher threshold of justification, which has not been established to date.
Ofcom could impose surveillance on all private messaging users with a notice, underpinned by significant financial penalties, with fewer legal protections than equivalent powers under the Investigatory Powers Act.
The proposed interferences with the rights of UK citizens arising from surveillance under the Bill are unlikely to be in accordance with the law and are open to legal challenge.
Journalists will not be properly protected from state surveillance, risking source confidentiality and endangering human rights defenders and vulnerable communities.
From raising awareness of encryption in public debate and demonstrating its real-world effects to policy makers, to highlighting the unintended legal and technological consequences of the Bill, this work means we finally have a Government that is at least not running headfirst into an attack on encryption that would be unprecedented in any democracy.
But the encryption die remains far from cast. Reports in the FT and elsewhere alluded to a Government ‘u-turn’ ahead of a Ministerial statement on Wednesday (6 September) that delivered nothing of the sort.
While some in the Government are briefing that encryption will be protected, the actions of its ministers do not match up to those words.
A new report by Index on Censorship this week revealed that the Online Safety Bill has alarming consequences when put alongside the controversial Investigatory Powers Act (the snooper’s charter). This access, unprecedented in any Western democracy, could give the Home Office entry to British citizens’ personal messages as follows:
Ofcom issues a notice mandating the use of Accredited Technology to provide a backdoor to encrypted messages under the Online Safety Bill (Section 122)
The Home Office or the security services apply for a bulk surveillance warrant on national security grounds under the Investigatory Powers Act, granting them access to bulk data
This is extremely concerning, not least because the window in which the Government can legislate its way out of this mess is rapidly closing. The Online Safety Bill will return to the House of Commons for the first time in eight months on Monday (11 September) for consideration of Lords amendments.
This is the last and only chance the Government has to follow up its words with action. They must go beyond Wednesday’s ministerial statement and allay these concerns once and for all by amending the Bill’s Section 122 notices as well as excluding use of the IPA in conjunction with the Bill.
Our report sets out how the government can get this right. We’re running out of time. We hope that the government will see sense and put down amendments to fix the backdoor in the Online Safety Bill.
A new report from Index on Censorship raises the alarm over proposed legislation that could lead to unprecedented and chilling surveillance of British citizens under the Investigatory Powers Act.
Clause 122 of the Online Safety Bill provides Ofcom with the means to break encrypted messaging services through ‘technology notices’ served without legal oversight. Once ‘Accredited Technology’ is used to break encryption, the Home Office has the power to use ‘bulk surveillance warrants’ under the Investigatory Powers Act, providing access to encrypted private messages en masse for the first time.
Without urgent clarification in Parliament, there is a risk that security services such as MI5 could compel technology companies that operate encrypted messaging services to interfere with user communications or acquire masses of data in secret. There is no clarity to date on whether Ofcom would be notified under such circumstances, nor on whether Ofcom itself could be subjected to a bulk surveillance warrant as a result of the data insights it gains in its role as an independent regulator.
The long-standing campaign against the use of encryption technology has now seemingly culminated in a two-pronged legislative attack on British rights to privacy and freedom of expression. This report outlines (1) the meaning of the new enforcement powers under the Online Safety Bill, (2) the surveillance gateway that is being opened, (3) proposed reforms to the Investigatory Powers Act and (4) the key questions that Parliament urgently needs answered.
On Monday, 11 September 2023, the House of Commons will review the Online Safety Bill for the first time in nine months, when MPs will decide whether to accept the Government’s amendments introducing mass surveillance of British people and to sign off on a massive curtailment of journalistic freedoms.
Download the report here or read it below.
Threats to free speech and freedom of expression can come from a range of different places. Most often it is despotic tyrants who use fear and violence to crush dissent, but sometimes it comes from the unintended consequences of those trying to control something new.
With advancements in AI, online advertising and digital news outlets, those with both the power and responsibility to regulate the internet are grappling with complex new challenges at a time when more and more people want to protect their online rights.
We’ve seen this happen in the UK as the Online Safety Bill slowly grinds through Westminster and lawmakers try to find new ways of protecting both users and free speech.
But it’s not just the UK which is struggling to find that balance: there is another piece of proposed regulation in Europe that is rapidly becoming a potential threat to freedom of speech and freedom of expression.
Since 2016, following the Brexit referendum and the election of Donald Trump, there has been much discussion about proposals to regulate political advertising online. Seven years later, the European Union is finally taking its first steps into this tangled web, trying to come up with rules that will regulate exactly how political advertising can work inside the union.
As it is currently drafted, Europe’s political ads regulation could have a huge chilling effect both for Europeans and for those who rely on the EU’s protection of expression. We would all agree that scrutiny of, and transparency in, political campaigning is a cornerstone of a thriving and functioning democracy. To be able to hold political parties and their leaders to account for what they say and do is integral to democratic societies.
Paid-for political advertising is nothing new. It has been a part of traditional campaigning across Europe for years. From newspapers to billboards, political actors have always paid to get their message across to the electorate.
But the EU is proposing to go much further than regulating just paid political advertising online: its draft regulation could cover any content that could be deemed to advance a political view.
This sweeping definition would include unpaid content, created by citizens or grassroots campaigners, regardless of whether someone has paid for the content to be placed.
Online content including YouTube videos and tweets from members of the public could suddenly become subject to strict rules on what they can and cannot say, with legal consequences for platforms if content deemed to be political is not removed within a tight 48-hour window.
Imagine the silencing of online public commentary during the French Presidential debates or the Irish Dail elections if platforms are suddenly required to regulate every comment offered by every pundit, every journalist or every citizen. Such a broad definition would likely lead to the zealous over-removal of any content deemed political for fear of penalty, opening the door for manipulation by tyrants, bad-faith actors and political opponents looking to limit freedom of speech online.
Even analysis about a political party’s electoral fortunes might be caught in the net and that is before we even know what the Commission plans for citizen journalism or those who run third-party campaigns against extremists.
What is more, citizens and campaigners in countries where freedom of speech is not afforded the same protections as in the EU often rely on European media outlets for access to real news. If news commentary and debate in the run-up to elections is undermined then this poses a direct threat to Europe’s role as a human rights leader. Suddenly, voices of democracy based in Europe would be silent and all as a consequence of the EU trying to protect the integrity of their own democracy.
Allowing people to express their political views is a fundamental component of freedom of expression. Not only that, restricting online content in this way goes much further than any offline restriction on freedom of speech. Much like the Digital Services Act that we campaigned on in 2021, the unintended consequences of trying to regulate our digital world appear to affect our real-world freedoms.
Index’s position then, as it is now, is clear: what is legal to say offline should be the benchmark for what is legal to say online. Content created online to promote a legitimate political ideology, viewpoint or authorised candidacy should be afforded the same freedom of speech protection that it would enjoy offline.
We should all be more cautious about believing what we read on the internet – just as we should approach our daily newspaper reports with a healthy dose of scepticism. But, if politicians and political actors have a message they wish to promote, they should be free to do so, however much we might disagree with their view. If campaigners want to support or oppose what they see and hear online, we should welcome the discourse and accept that in a vibrant democratic society there will be differences.
Ultimately, the people who will properly regulate political advertising are the voters themselves. If they think they are being conned or hoodwinked, they will show that at the ballot box.
By all means, let’s ensure that the rules governing online adverts are the same as the rules governing offline campaigning. Let’s bring in the transparency and the openness that ensures a level playing field and a fair fight for politics offline but let’s not imperil political advertising or push out the marginal voices who so often rely on digital ads to be heard.
The EU must reconsider its online political ads regulation and use it as a chance to embed transparency, not eliminate debate.
At the end of the day, voters will have the final word but before then, they have a right to hear what is being said.
After seven years of debate, five Secretaries of State and hours and hours of parliamentary discussion, the Online Safety Bill has reached the second chamber of the British legislature. In the coming months new language will be negotiated, legislative clauses ironed out and deals done with the government to get it over the line. But the questions for Index are what the final impact of the legislation on freedom of expression will be, and how we will know how much content is being deleted as a matter of course.
The team at Index have been working with partners for several years to try to ensure that freedom of expression online is protected within the legislation, and that the unintended consequences of the bill don’t impinge on our rights to debate, argue, inspire and even dismiss each other on the online platforms which are now fundamental to many of our daily lives. After all, in a post-Covid world, many of us don’t differentiate between time spent online and time spent in real life; they are typically one and the same. That isn’t to say, however, that as a society we have managed to establish social norms online (as we have offline) which allow the majority of us to go about our daily lives without unnecessary conflict and pain.
We’ve been working so intently on this bill not just because we want to protect digital rights in the UK, but because this legislation is likely to set a global standard. Restrictions on speech in this legislation will give cover to tyrants and bad-faith actors around the world who seek to use aspects of this new law to impinge on the free expression of their own populations, which is why our work on this bill is so important.
We still have two main concerns about the legislation in its current form. The first is the definition, identification and deletion of illegal content. The legislation currently demands that platforms determine what is illegal and then automatically delete that content so it can’t be seen, shared or amplified. In theory that sounds completely reasonable, but given the sheer scale of content on social media platforms, these determinations will have to be made by algorithms, not people. And as we know, algorithms have built-in bias and struggle to identify nuance, satire or context. That’s even more the case when the language isn’t English or the content is imagery rather than words. When you add in the prospect of corporate fines and executive prosecution, it’s likely that most platforms will opt to over-delete rather than risk falling foul of the new regulatory regime. Content that contains certain keywords, phrases or images is likely to be deleted by default, even if the context is the opposite of their normal use. The unintended consequence of seeking to automatically delete content without giving platforms a liability shield, so that they can retain posts without being criminally liable, will be mass over-deletion.
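To make the over-deletion risk concrete, the short sketch below shows how a naive keyword filter, of the kind platforms might fall back on at this scale, behaves. The blocklist, the example posts and the function are entirely hypothetical illustrations, not drawn from the Bill or from any real platform; the point is simply that string matching flags news reporting, satire and survivors’ testimony just as readily as genuinely illegal content.

```python
# A minimal sketch of why keyword-based moderation over-deletes.
# The filter, blocklist and example posts are hypothetical, for
# illustration only; they are not drawn from the Bill or any platform.

BLOCKED_TERMS = {"attack plan", "buy weapons"}  # hypothetical blocklist

def naive_filter(post: str) -> str:
    """Delete any post containing a blocked term, regardless of context."""
    text = post.lower()
    if any(term in text for term in BLOCKED_TERMS):
        return "DELETE"
    return "KEEP"

posts = [
    "We will attack plan B at tomorrow's budget meeting.",      # workplace chatter
    "Police arrested a man who tried to buy weapons online.",   # news reporting
    "Survivors describe how the attack planned against them failed.",  # testimony
]

for post in posts:
    print(naive_filter(post), "-", post)

# All three posts are flagged for deletion even though none is illegal
# content: the filter matches strings, it cannot read context.
```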
The second significant concern is the proposals to break end-to-end encryption. The government claims that this would only be used to find evidence of child abuse, which again sounds reasonable. But end-to-end encryption cannot be a halfway house: something is either encrypted to ensure privacy or it isn’t, and can therefore be hacked. And whilst no one would or should defend those who use such tools to hurt children, we need to consider how else these tools are used: by dissidents to tell their stories, by journalists to protect their sources, by families to share children’s photos, by banks and online retailers to keep us financially protected, and by victims of domestic violence to plan their escape. This is not a simple tool which can be broken on a whim; we need to find a way to make sure that everyone is protected while we seek to shield children from abusers. This cannot be beyond the wit of our legislators, and in the coming months we’ll find out.
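To illustrate why encryption is all-or-nothing, here is a minimal sketch using the open-source PyNaCl library (Python bindings to libsodium) to model end-to-end encryption between two users, with a relay that only ever handles ciphertext. It is a toy under that assumption, not the protocol of any real messaging app or the Bill’s ‘Accredited Technology’: the point is that the relay cannot read the message, so any mandated scanning has to happen before encryption or via access to the keys, which removes the protection for every user rather than only for suspects.

```python
# Minimal end-to-end encryption sketch using PyNaCl (libsodium bindings).
# Hypothetical example for illustration; not any real messaging protocol.
from nacl.public import PrivateKey, Box

# Each user generates a keypair; only public keys are ever shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"Meet at the usual place at 7.")

# The relay "server" in the middle only ever sees opaque bytes.
relayed = bytes(ciphertext)
print("What the relay can read:", relayed[:24], "...")  # random-looking bytes

# Only Bob, holding his private key, can decrypt.
receiving_box = Box(bob_key, alice_key.public_key)
print("What Bob can read:", receiving_box.decrypt(relayed))

# There is no middle ground: to scan message content, the provider would
# have to read it before encryption or hold a copy of the keys, and either
# route removes the protection for every user of the service.
```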