[vc_row][vc_column][vc_column_text]Today, Tuesday, the British government has finally responded to its own consultation on Online Harms. Our role at Index on Censorship is to defend free expression and free speech for all citizens, wherever they live – including in the UK.
Index has significant concerns about the government’s proposals and their unintended consequences on our collective right to free speech. We are concerned about the global impact of these proposals and the message that is being sent by the British government – by instituting restrictive policies for social media companies – to repressive regimes who relentlessly seek to undermine the rights of their citizens.
While acknowledging that there are problems with the regulation of online platforms, Index will engage with policy makers to try to improve this legislation so that it better protects our right to free expression.
Our key concerns are:
The British government is proposing a new classification of speech: legal but harmful. Content such as abuse would effectively be treated as illegal online while remaining perfectly legal offline. Such inconsistency in our legal framework for speech is ludicrous and would have significant unintended consequences.
The penalties outlined in these proposals focus on the role of the platforms in regulating their online spaces – not on their users, who seemingly bear limited personal responsibility. The proposals also fail to acknowledge that this is a cultural problem and therefore needs a carrot as well as a stick.
The proposals would fine social media companies for not complying with the new regulatory framework. Although ministers have offered warm words about protecting freedom of speech, it seems highly unlikely that a platform would be sanctioned for deleting too much content, leading social media companies to always err on the side of caution and delete challenging content even if it does not contravene the legislation.
These proposals seemingly advocate the permanent removal of significant amounts of content, curtailing a victim’s ability to prosecute: once content is deleted by a platform there is no way to retrieve it, even for law enforcement. This includes evidence of terrorist atrocities; 23% of the Syrian War Crime Archive has already been deleted by the platforms. Because there are no legal protections allowing the platforms to store this content (out of sight) for access by law enforcement, journalists and academics, prosecutions and analysis are lost. Index believes a compromise would be the creation of a legal framework to allow social media platforms to create Digital Evidence Lockers.[/vc_column_text][/vc_column][/vc_row]
[vc_row][vc_column][vc_column_text]The Rt Hon Jeremy Wright QC MP
Secretary of State for Digital, Culture, Media and Sport
100 Parliament Street
London
SW1A 2BQ
1 July 2019
Re: Online Harms White Paper
Dear Secretary of State,
We write as a group of organisations keenly interested in the government’s proposals for Internet regulation. We recently convened a day-long multi-stakeholder workshop to discuss the implications of the 2019 Online Harms White Paper and write to share the conclusions and findings from that event.
Organisations represented at the workshop included human rights NGOs, social media platforms, telecoms and media companies, news media, industry associations, parenting and child rights organisations, academia, think tanks, government departments and independent regulators. The aim was to bring together representatives from all relevant sectors, discuss differences of opinion and find areas of consensus.
One unanimous finding from the day was that “there is a need for a systematic approach to dealing with problematic content online, but the group did not support the adoption of a ‘duty of care’ approach”. Many participants noted that the concept of duty of care does not translate well from the offline to the online context, and as such it provides little clarity as to what duties can and should be expected of companies within scope of the OHWP.
Another key finding of the workshop was that all parties felt that, whilst government departments had conducted outreach throughout this process, no government exercise had brought together all of the key groups (including civil society organisations, children’s charities, media companies, global tech giants, British startups, and UK media/press) in a coherent way.
We believe this risks a process dominated by some stakeholders, in which policy is developed without a full overview of where stakeholders’ concerns and consensus really lie. We urge that, after the formal consultation period closes, you consider convening a comprehensive meeting with all relevant stakeholders to formally discuss key elements of the proposals and map a way forward.
We welcome this opportunity to continue to engage with the government and look forward to your response.
Yours sincerely,
Oxford Internet Institute
Open Rights Group
Global Partners Digital
Index on Censorship
The Coalition for a Digital Economy
Cc Secretary of State for the Home Department, The Right Hon. Sajid Javid MP[/vc_column_text][/vc_column][/vc_row]
[vc_row][vc_column][vc_column_text]
Recommendations
Introduction
The proposals in the government’s online harms white paper risk damaging freedom of expression in the UK, and abroad if other countries follow the UK’s example.
The proposals come less than two months after the widely criticised Counter-Terrorism and Border Security Act 2019. The act contains severe limitations on freedom of expression and access to information online (see Index report for more information).
The duty of care: a strong incentive to censor online content
The proposed new statutory duty of care to tackle online harms, combined with the possibility of substantial fines and possibly even personal criminal liability for senior managers, risks creating a strong incentive to restrict and remove online content.
Will Perrin and Lorna Woods, who have developed the online duty of care concept, envisage that the duty will be implemented by applying the “precautionary principle” which would allow a future regulator to “act on emerging evidence”.
Guidance by the UK Interdepartmental Liaison Group on Risk Assessment (UK-ILGRA) states:
“The purpose of the Precautionary Principle is to create an impetus to take a decision notwithstanding scientific uncertainty about the nature and extent of the risk, i.e. to avoid ‘paralysis by analysis’ by removing excuses for inaction on the grounds of scientific uncertainty.”
The guidance makes sense when addressing issues such as environmental pollution, but applying it in a context where freedom of expression is at stake risks legitimising censorship – a very dangerous step to take.
Not just large companies
The duty of care would cover companies of all sizes: social media companies, public discussion forums, retailers that allow users to review products online, non-profit organisations (for example, Index on Censorship), file-sharing sites and cloud hosting providers. A blog and its comments would be included, as would shared Google documents.
The proposed new regulator is supposed to take a “proportionate” approach, which would take into account companies’ size and capacity, but it is unclear what this would mean in practice.
Censoring legal “harms”
The white paper lists a wide range of harms, for example, terrorist content, extremist content, child sexual exploitation, organised immigration crime, modern slavery, content illegally uploaded from prisons, cyberbullying, disinformation, coercive behaviour, intimidation, under 18s using dating apps and excessive screen time.
The harms are divided into three groups: harms with a clear definition; harms with a less clear definition; and underage exposure to legal content. Activities and materials that are not illegal are explicitly included. This would create a double standard, where activities and materials that are legal offline would effectively become illegal online.
The focus on the catch-all term of “harms” tends to oversimplify the issues. For example, the recent study by Ofcom and the Information Commissioner’s Office, Online Nation, found that 61% of adults had a potentially harmful experience online in the last 12 months. However, this included “mildly annoying” experiences. Not all harms need a legislative response.
A new regulator
The white paper proposes the establishment of an independent regulator for online safety, which could be a new or existing body. It mentions the possibility of an existing regulator, possibly Ofcom, taking on the role for an interim period to allow time to establish a new regulatory body.
The future regulator would have a daunting task: defining what companies (and presumably others covered by the proposed duty of care) would need to do to fulfil the duty of care, establishing a “transparency, trust and accountability framework” to assess compliance, and taking enforcement action as needed.
The regulator would be expected to develop codes of practice setting out in detail what companies need to do to fulfil the duty of care. If a company chose not to follow a particular code it would need to justify how its own approach meets the same standard as the code. The government would have the power to direct the regulator in relation to codes of practice on terrorist content and child sexual exploitation and abuse.
Enforcement
The new enforcement powers outlined in the white paper will include substantial fines. The government is inviting consultation responses on a list of possible further enforcement measures. These include disruption of business activities (for example, forcing third-party companies to withdraw services), ISP blocking (making a platform inaccessible from the UK) and creating a new liability for individual senior managers, which could involve personal liability for civil fines or could even extend to criminal liability.
Undermining media freedom
The proposals in the white paper pose a serious risk to media freedom. Culture Secretary Jeremy Wright has written to the Society of Editors in response to concerns, but many remain unconvinced.
As noted, the proposed duty of care would cover a very broad range of “harms”, including disinformation and violent content. In combination with fines and potentially even personal criminal liability, this would create a strong incentive for platforms to proactively remove content, including news that might be considered “harmful”.
Index has filed an official alert about the threat to media freedom with the Council of Europe’s Platform to promote the protection of journalism and safety of journalists. Index and the Association of European Journalists (AEJ) have made a statement about the lack of detail in the UK’s reply to the alert. At the time of writing the UK has not provided a more detailed reply.
Censorship and monitoring
The European Union’s e-commerce directive is the basis for the current liability rules related to online content. The directive shields online platforms from liability for illegal content that users upload unless the platform is aware of the content. The directive also prohibits general monitoring of what people upload or transmit.
The white paper states that the government’s aim is to increase this responsibility and that the government will introduce specific monitoring requirements for some categories of illegal content. This comes close to dangerous censorship territory, and it is doubtful whether it would be compatible with the e-commerce directive.
Restrictions on freedom of expression and access to information are extremely serious measures and should be backed by strong evidence that they are necessary and will serve an important purpose. Under international law freedom of expression can only be restricted in certain limited circumstances for specific reasons. It is far from clear that the proposals set out in the white paper would meet international standards.
Freedom of expression – not a high priority
The white paper gives far too little attention to freedom of expression. The proposed regulator would have a specific legal obligation to pay due regard to innovation. When it comes to freedom of expression, the paper refers only to an obligation to protect users’ rights, “particularly rights to privacy and freedom of expression”.
It is surprising and disappointing that the white paper, which sets out measures with far-reaching potential to interfere with freedom of expression, does not contain a strong and unambiguous commitment to safeguarding this right.
Contact: Joy Hyvarinen, Head of Advocacy, [email protected][/vc_column_text][/vc_column][/vc_row]
[vc_row][vc_column][vc_single_image image=”97329″ img_size=”full” add_caption=”yes” alignment=”right”][vc_column_text]“Fake news”. The phrase emerged only a few years ago yet has become familiar to everybody. The moral panic around fake news has grown rapidly: in its short life the phrase has been named Collins Dictionary’s word of the year for 2017, and the Bulletin of the Atomic Scientists says it was one of the driving factors behind setting its symbolic Doomsday Clock to two minutes to midnight in 2019. It is a talking point on the lips of academics, media pundits and politicians.
Many fear that “fake news” could lead to the end of democratic society, clouding our ability to think critically about important issues. Yet the febrile atmosphere surrounding it has led to legislation around the world that could potentially harm free expression far more than the conspiracy theories being peddled.
In Russia and Singapore politicians have taken steps to legislate against the risk of “fake news” online. A report published in April 2019 by the Department for Digital, Culture, Media and Sport could lead to stronger restrictions on free expression on the internet in the UK.
The Online Harms White Paper proposes ways in which the government can combat what are deemed to be harmful online activities. However, while some of the harmful activities specified — such as terrorism and child abuse — clearly fall within the government’s remit, the paper also places various unclearly defined practices, such as “disinformation”, under scrutiny.
Internet regulation would be enforced by a new independent regulatory body, similar to Ofcom, which currently regulates broadcasts on UK television and radio. Websites would be expected to conform to the regulations set by the body.
According to Jeremy Wright, the UK’s Secretary of State for Digital, Culture, Media and Sport, the intention is that this body will have “sufficient teeth to hold companies to account when they are judged to have breached their statutory duty of care”.
“This will include the power to issue remedial notices and substantial fines,” he says, “and we will consult on even more stringent sanctions, including senior management liability and the blocking of websites.”
According to Sharon White, the chief executive of the UK’s media regulatory body Ofcom, the term “fake news” is problematic because it “is bandied around with no clear idea of what it means, or agreed definition. The term has taken on a variety of meanings, including a description of any statement that is not liked or agreed with by the reader.” The UK government prefers to use the term “disinformation”, which it defines as “information which is created or disseminated with the deliberate intent to mislead; this could be to cause harm, or for personal, political or financial gain”.
However, the difficulty of proving that false information was published with an intention to cause harm could potentially affect websites which publish honestly held opinions or satirical content.
As a concept, “fake news” is frequently prone to bleeding beyond the boundaries of any attempt to define it. Indeed, for many politicians, that is not only the nature of the phrase but the entire point of it.
“Fake news” has become a tool for politicians to discredit voices which oppose them. Although the phrase may have been popularised by US President Donald Trump to attack his critics, the idea of “fake news” has since become adopted by authoritarian regimes worldwide as a justification to deliberately silence opposition.
As the late US Senator John McCain wrote in a piece for The Washington Post: “the phrase ‘fake news’ — granted legitimacy by an American president — is being used by autocrats to silence reporters, undermine political opponents, stave off media scrutiny and mislead citizens.
“This assault on journalism and free speech proceeds apace in places such as Russia, Turkey, China, Egypt, Venezuela and many others. Yet even more troubling is the growing number of attacks on press freedom in traditionally free and open societies, where censorship in the name of national security is becoming more common.”
In Singapore — a country ranked by Reporters Without Borders as 151 out of 180 nations for press freedom in 2019 — a bill was introduced to parliament ostensibly intended to combat fake news.
Singapore’s Protection from Online Falsehoods and Manipulation Bill would permit government ministers to order the correction or removal of online content that is deemed to be false. It is justified under very broad, tautological definitions, which state amongst other things that “a falsehood is a statement of fact that is false or misleading”. On this basis, members of the Singaporean government could easily use the law to censor any articles, memes, videos, photographs or advertising that offends them personally, or that is seen to impair the government’s authority.
In addition to more conventional definitions of public interest, the term is defined in the bill as including anything which “could be prejudicial to the friendly relations of Singapore with other countries.” The end result is that Singaporeans could potentially be charged not only for criticising their own government, but Singapore’s allies as well.
Marte Hellema, communications and media programme manager for the human rights organisation FORUM-ASIA, explains her organisation’s position: “We are seriously concerned that the bill is primarily intended to repress freedom of expression and silence dissent in Singapore.”
Hellema pointed out that the law would be in clear violation of international human rights standards and criticised its use of vague terms and lack of definitions.
“Combined with intrusive measures such as the power to impose heavy penalties for violations and order internet services to disable content, authorities will have the ability to curtail the human rights and fundamental freedoms of anyone who criticises the government, particularly human rights defenders and media,” Hellema says.
In Russia, some of the most repressive legislation to come out of the wave of talk about “fake news” was signed into law earlier this year.
In March 2019, the Russian parliament passed two amendments to existing data legislation to combat fake news on the internet.
The laws censor online content which is deemed to be “fake news” according to the government, or which “exhibits blatant disrespect for the society, government, official government symbols, constitution or governmental bodies of the Russian Federation”.
Online news outlets and users that repeatedly run afoul of the laws will face fines of up to 1.5 million roubles (£17,803) for being seen to have published “unreliable” information.
Additionally, individuals who have been accused of specifically criticising the state, the law or the symbols which represent them risk further fines of 300,000 roubles (£3,560) or even prison sentences.
The move has been criticised by public figures and activists, who see the new laws as an attempt to stifle public criticism of the government and increase control over the internet. The policy is regarded as a continuation of previous legislation in Russia designed to suppress online anonymity and blacklist undesirable websites.[/vc_column_text][/vc_column][/vc_row]