
Encryption, Anonymity & Human Rights: Workshop Summary

By Rebekah Larsen and Ella McPherson

Image by kris krüg, 'Manufactured Security,' cc via Flickr

On 29 June 2015, the Centre of Governance & Human Rights (CGHR) and Amnesty International’s Technology and Human Rights team co-hosted an exploratory workshop on encryption, anonymity, and human rights.  Given that documenting and transmitting information about human rights violations can put human rights witnesses and fact-finders in danger, secure communication is core to human rights work.

As this work, along with communicative activity in general, has moved into the digital space, the human rights perspective has become increasingly important in ongoing domestic and international debates on encryption, anonymity, and surveillance.  Rallying points in these debates include the recent report to the United Nations Human Rights Council by the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, as well as the UK’s Investigatory Powers Bill, proposed earlier this year.

Workshop attendees from NGOs and academia (computer science, the social sciences, and law) participated in sessions focused on the evolving technical, socio-political, legal, and practical aspects of encryption and anonymity.  Our aim was to foster knowledge exchange between participants with an eye to uncovering core opportunities and risks for human rights work and potential areas for collaboration.  As such, we divided the day into the four sessions summarized below, culminating in a session on the ways forward for the protection and promotion of encryption and anonymity.  The workshop formed part of CGHR’s recently launched research theme, Human Rights in the Digital Age.

Encryption masterclass

Any inter-disciplinary discussion is well-served by ensuring participants have the same baseline technical understanding, which we achieved through a masterclass on encryption and anonymity led by a researcher from Cambridge’s Computer Laboratory.  Encryption is the mathematical manipulation of information to render it readable solely by the person intended to receive it, while the complementary but separate condition of anonymity refers to successfully concealing one’s identity.  (For more background, watch Steven Murdoch on anonymity or read Frank Stajano on computer security.)

These age-old aspects of communication are cornerstones of computer security and thus of our privacy online.  The masterclass accordingly kicked off with descriptions of public key cryptography for encrypting digital messages and IP-masking tools such as Tor that support anonymity.  We spent a considerable amount of time understanding threats to computer security.  We learned, for example, that the obfuscation of identifiers such as name and IP address alone does not necessarily provide anonymity; metadata generated by our use of ICTs, such as the pattern of our locations tracked and transmitted by our mobile phones, may reveal our identities just as reliably.
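To make the idea of public key cryptography concrete, here is a minimal sketch in Python, assuming the widely used third-party cryptography package (the package choice and the message are our illustrative assumptions, not tools from the masterclass):

```python
# Toy demonstration of public key cryptography (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The recipient generates a key pair and publishes the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Anyone can encrypt with the public key...
ciphertext = public_key.encrypt(
    b"Meet at the usual place.",  # hypothetical message
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# ...but only the holder of the private key can decrypt.
plaintext = private_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert plaintext == b"Meet at the usual place."
```

Note that this sketch protects only the content of the message; as the masterclass stressed, the metadata around it (who sent what to whom, and when) is a separate problem.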

We also learned that the math underpinning cryptography is so robust that, interestingly, most threats to encryption arise not from machine weaknesses but from the exploitation of human weaknesses.  This usually occurs through deception aimed at the end-points of communication, i.e. you on your laptop.  Man-in-the-middle attacks, where someone pretends to be the interlocutor you are trying to reach, are a case in point.

Digital signatures and website security certification offer a measure of protection against these attacks.  The latter verifies that websites belong to whom they purport to belong, so it is worth paying attention to those pesky pop-up warnings about security certificates!  
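As a rough illustration of how a digital signature catches tampering in transit, consider the following Python sketch (again assuming the cryptography package; both messages are invented for the example):

```python
# Minimal digital signature sketch: the sender signs with their private
# key, and anyone can verify with the matching public key.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"This really is from me."
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verification fails loudly if a man in the middle altered the message.
tampered = b"This really is from me; please send money!"
try:
    public_key.verify(signature, tampered,
                      padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                                  salt_length=padding.PSS.MAX_LENGTH),
                      hashes.SHA256())
except InvalidSignature:
    print("Tampering detected: signature does not match message.")
```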

Another threat to encryption comes from brute-force attacks, where the assailant uses sustained trial-and-error attempts to uncover passwords and keys.  Computer security is also imperilled by the identification of security holes in software before that software’s developer discovers and patches them.  These holes – known as ‘zero-day vulnerabilities’ – allow for zero-day exploits, code that hackers use to install malware on devices running the vulnerable software.  A robust market for zero-day vulnerabilities and exploits exists, composed of a variety of buyers, including anonymous bidders; the software developers themselves; and states, some of which are outbidding the software developers to add the flaws to their digital arsenals.  The UK’s GCHQ, for example, was recently found to have zero-day exploits in its repertoire of hacking tactics known as ‘computer network exploitation’ or CNE.
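The arithmetic behind brute force is worth making concrete.  A four-digit PIN offers only 10,000 possibilities and falls in milliseconds, whereas a 128-bit key has roughly 3.4 × 10^38 possibilities, far beyond exhaustive search.  A toy Python sketch (the PIN and the use of a plain hash are illustrative assumptions; real systems use salted, deliberately slow password hashing):

```python
# Toy brute-force attack: recover a 4-digit PIN from its hash by trying
# every candidate. Illustrates why short secrets fall quickly and why
# real cryptographic keys are hundreds of bits long.
import hashlib

stolen_hash = hashlib.sha256(b"4821").hexdigest()  # attacker obtains this

for guess in range(10_000):            # only 10,000 candidates to try
    candidate = f"{guess:04d}".encode()
    if hashlib.sha256(candidate).hexdigest() == stolen_hash:
        print("PIN recovered:", candidate.decode())
        break
```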

The above are generally ad hoc threats to computer security; systematic threats, however, arise from government policies to weaken or ban encryption – the topic we next turned to in the workshop.

Efforts by states to ban or control the use of encryption and anonymity

The aim of this session was to discuss how states are limiting the use of encryption, the narratives they are using to justify this, and the extent to which encryption interferes with legitimate law enforcement and intelligence activities.  We kicked off with a reference to Prime Minister David Cameron’s January 2015 comment that no means of communication should exist that the government cannot access ‘in extremis.’  

Though exactly what the government might want is unclear, many interpreted this statement as an expression of interest in encryption ‘backdoors’ (a statement echoed in Cameron’s recent response to a question about the privacy policies of Google, Twitter and Facebook in July 2015).  These backdoors, built into encryption technology, would – in theory – allow the government, and only the government, to read communications encrypted by that technology.

Cameron’s comments were positioned in the context of the threat of terrorism and relate to similar statements by the security agencies, such as the view of MI5’s Director General that ‘the dark places from where those who wish us harm can plot and plan are increasing.  We need to be able to access communications and obtain relevant data on those people when we have good reason to do so.’  This view has been echoed by the director of the FBI, who recently argued that the UN-proclaimed terrorist group Islamic State (also known as ISIS) benefits from the use of end-to-end encryption.

Statements such as these, which posit national security as in tension with computer security, have historical precedent.  Workshop participants noted the parallels between the current legislative climate related to encryption and the ‘Crypto Wars’ of the 1990s.  The debate then was about the U.S. government’s intent to put an encrypting microchip – the ‘Clipper Chip’ – into individual telephones.  The government’s aim – ultimately thwarted – was to give itself a backdoor into communication over these phones by holding a copy of each device’s private cryptographic key.

Now, as then, the arguments for limiting encryption and anonymity rest on powerful discourses.  We discussed the power of the ‘nothing-to-hide’ discourse and its relationship to the public’s tolerance for state surveillance.  We wondered about the connection between this tolerance for surveillance – which alternatively might be ignorance or resignation – and technology companies’ encouragement of information-sharing and normalization of data-gathering.

We also talked about the perception that anonymous communication and individuals’ right to privacy can conflict, as when anonymity is used to spread rumours about identifiable individuals.  This was the legal argument behind Brazil’s 2014 banning of the Secret app.

Though the calls for limits on encryption and anonymity seem increasingly audible among policy-makers, publics and the technology industry are not necessarily echoing them.  For example, Amnesty’s recent poll of publics in 13 countries revealed strong, if variable, support for technology companies’ protection of users’ personal information from governments.

In the wake of the Snowden revelations about government surveillance programs, some technology companies have introduced or strengthened the security of their products. Two of the most prominent examples include WhatsApp, which introduced end-to-end encryption for its instant messages, and Apple, which enhanced content encryption on the iPhone 6.  Such actions have put these companies at odds with policymakers in the encryption arena, particularly in the UK.

These companies’ encryption measures stymie the government’s efforts to access the content of individuals’ communications, as the companies themselves cannot decrypt it.  Added to this is the problem for states that online cross-border communications are subject to a variety of national jurisdictions.  In particular, many of the major technology companies are headquartered in the U.S., and U.S. laws prohibit these companies from handing over their users’ private information to the security services in the U.K.

Given this complicated terrain, it is no wonder that the search is on for technical and legal middle roads in the privacy-versus-security debate.  Although workshop participants had different perspectives on the balance of security versus privacy, the consensus was that backdoors to encryption are not the solution.

For one thing, alternative middle roads (though each with its own shortcomings) exist that better meet principles of necessity and proportionality.  These include targeted surveillance, targeted decryption orders enacted with judicial oversight, U.S.-European privacy standards, and an international treaty providing a ‘front door’ framework for technology companies to disclose customer data to states.

For another, computer security experts argue that encryption backdoors and other proposals for governments’ exceptional access introduce so much complexity that they are technologically cumbersome – if not impossible.  Their complexity also creates new opportunities for security breaches; for example, if government agents can use a backdoor, it is highly likely that hackers will figure out how to gain access as well.  Implementation is also complicated: what is to stop relatively repressive regimes from demanding backdoors too?  (For more on this, read the July 2015 report, ‘Keys Under Doormats: Mandating Insecurity by Requiring Government Access to all Data and Communications.’)

This report renders moot any government demands for exceptional access to encrypted communication by shifting the spotlight away from the desirability of such access to its technical feasibility.  But technical feasibility is not the only argument against weakening encryption.  Crucial sectors of society need encryption to safely do their work – not least the human rights community.

The importance of encryption and anonymity for human rights work

This third session of the workshop addressed how limiting encryption and anonymity might harm human rights work – and indeed contravene human rights themselves.  

First, we discussed how proposed and actual limits create risks for human rights defenders.  Sensitive human rights communications are often about the very governments that would then have access to these private communications.  Furthermore, outright bans would make it much easier to single out human rights defenders should they choose to deploy encryption or anonymity.  Participants raised the example of six bloggers and three journalists in Ethiopia who were arrested in 2014 and charged with alleged crimes including the use of ‘Security in a Box,’ a guide on encryption and other security tactics for human rights defenders.

Second, the use of encryption and anonymity raises problems related to human rights defenders’ digital literacy – whether they have it or not.  Human rights practitioners in attendance reported, for example, that developments in ICTs vastly outpace human rights defenders’ abilities to keep their digital literacy up to date.  Online security has not traditionally been part of the human rights skill set; time and resources are needed to surmount this steep learning curve.

Human rights NGOs are by no means alone in lacking digital literacy on security.  In his 2015 report on ICTs and the right to life (for which CGHR provided research support), the UN Special Rapporteur on extrajudicial, summary or arbitrary executions commented on the same problem at the UN Human Rights Council.  For example, several Human Rights Council mechanisms solicit reports of human rights violations over email without indicating attendant risks or strategies for more secure communications.

Human rights defenders who, in contrast, know all too well the risks of online communication contribute to another challenge for human rights reporting.  Concerns about digital surveillance have had an observable chilling effect on reports of violations from civilian witnesses and activists on the ground.  In closed country contexts, the implication is a decrease in information crossing borders to connect with the international human rights community.  The threat of surveillance does not stop, however, at these borders; Amnesty International’s Secretary General, following the UK government’s admission in July that its agencies had been intercepting Amnesty’s communications, stated: ‘How can we be expected to carry out our crucial work around the world if human rights defenders and victims of abuses can now credibly believe their confidential correspondence with us is likely to end up in the hands of governments?’

Nor does the chilling effect stop at the human rights community, but rather is spreading across sectors and societies.  A Pew Research Center poll, for example, found that a third of American adults changed their online behaviour after the Snowden revelations to conceal it from the government.

Third, surveillance and the threat of surveillance have a chilling effect on the exercise of particular human rights.  The UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression has argued strongly in favour of encryption and anonymity to protect the rights within his mandate.  In his 2015 report to the Human Rights Council, the Special Rapporteur made the case that we increasingly use digital platforms to help us form opinions, that these platforms create traces of our opinion-forming processes such as search histories, and that – particularly since the right to freedom of opinion is absolute – we need security measures to protect it.

By positioning encryption and anonymity favourably with respect to the protection of human rights, the Special Rapporteur’s report reframes them in the public debate around limits and bans.  In the last session of the workshop, we turned to this public debate in more detail to identify interjections by which awareness of and support for encryption and anonymity might be built.

Protection and promotion of encryption and anonymity: Ways forward

This workshop session focused on how the group and others might facilitate the use of encryption and anonymity for human rights.  We began from the position that perception is a major impediment for this use – among members of the public, policy-makers, and human rights defenders themselves.  In John Naughton’s words, it is time to ‘reboot the discourse about democracy and surveillance.’  We spent much of this session identifying discourses harmful to encryption and anonymity and brainstorming how we might challenge them with counter-discourses.   

Harmful discourses include, of course, the nothing-to-hide argument.  In the human rights world, this discourse has the effect of tarring human rights defenders who use encryption and anonymity with the brush of criminality.

One reason why the nothing-to-hide argument has gained traction, we speculated, may be that privacy is not nearly as much of a concern for the general public as one might expect – or that individual privacy may not be the narrative that motivates citizens to support and deploy encryption and anonymity.  That being said, polls do show that individuals, whether from the general population or young people in particular, are concerned with the privacy of their online data.  Some argue that this ‘privacy paradox’ – a disjuncture between worry and action – results from a ‘knowledge gap’ between what users think is happening to their data and what is actually happening to it.

One tactic to boost the use of encryption and anonymity, therefore, is raising public awareness of state and corporate surveillance practices.  Another we discussed involves reframing encryption and anonymity as being not of individual benefit but rather of societal benefit.  The issue then becomes not about having nothing to hide but about supporting those – like human rights defenders and other whistleblowers – whose pursuit of accountability and justice on behalf of society relies on secure communications.

If whistleblowers are the only users of encryption and anonymity in a geographic area, they stick out like sore thumbs.  This is because, while others may not be able to access the content or identify the authors of whistleblowers’ encrypted and anonymous communication, they may be able to see that these whistleblowers are using encryption and anonymity tools.  For example, the NSA tracks visits to the Tor Project’s website.  Encryption and anonymity work best when all of us use them – and use them for banal purposes like ordering pizza – so that those who need them most can hide in a crowd; in technical terms, the bigger the anonymity set, the better (generally).
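A back-of-envelope illustration of the anonymity set (our own sketch, not a workshop calculation): if an observer can see who uses a tool but nothing more, the chance that any one flagged user is the whistleblower shrinks as the crowd grows.

```python
# If n people in a region use an anonymity tool and exactly one is the
# whistleblower, an observer's best blind guess succeeds with chance 1/n.
for n in (1, 10, 1_000, 100_000):
    print(f"anonymity set of {n:>7}: chance a given user is the target = {1 / n:.5%}")
```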

Of course, an implication of this feature of encryption and anonymity is that human rights activists should have a good understanding of encryption and anonymity take-up in their surrounding areas; this should be part of their threat modelling assessment of digital security, underpinned by an assumption that no online communication is ever 100% secure.  If take-up is low, activists might be better served hiding in plain sight – by communicating via Skype, for example, and thus being part of one conversation among a plethora.

Another useful counter-narrative we discussed is the essentiality of encryption for the digital economy.  Online banking depends on encryption, as does online shopping.  You are probably familiar with seeing ‘https’ in website URLs and the padlock symbol in your browser’s address bar, both of which indicate that any information you send over that site will be encrypted.  We are all too aware of the illegal trade in our credit card and identity details, stolen through hacking into corporate databases containing information we entered ourselves when participating in the digital economy.  Weakening encryption would make our financial details that much more vulnerable.
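For the curious, the certificate behind that padlock can be inspected with a few lines of Python’s standard library (the hostname below is a placeholder assumption; any HTTPS site works):

```python
# Inspect the TLS certificate that the browser padlock represents.
import socket
import ssl

hostname = "example.org"  # placeholder: substitute any HTTPS site
context = ssl.create_default_context()  # verifies against trusted CAs

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("TLS version:", tls.version())
        print("Issued to:  ", dict(pair[0] for pair in cert["subject"]))
        print("Valid until:", cert["notAfter"])
```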

For those of us who value democracy and the digital economy, then, using and supporting encryption and anonymity seem to be no-brainers – particularly given that targeted surveillance means that computer security and national security do not have to be at odds.  But are they really no-brainers? 

Another discourse we considered that impedes the use of encryption and anonymity is their reputation for technical inscrutability among laypeople.  While the underlying technologies are undoubtedly complex, the right pedagogical approaches make them much more approachable – as evidenced by our encryption masterclass, which included humorous metaphors (we learned about encryption as a sausage machine, but a comparison to curtains works equally well!) and was taught via the Socratic method.

Individuals’ aversion to learning about encryption and anonymity is understandable.  First, computer security is often a ‘secondary goal’ to the primary aims of communicating or browsing the web, so patience for comprehending it is limited.  Second, as we discussed, and in line with a famous experiment titled ‘Why Johnny Can’t Encrypt,’ a gulf exists between the language of programmers and the language of users.  This experiment demonstrated that two-thirds of its relatively well-educated test subjects could not figure out how to use PGP 5.0 to encrypt and sign an email, even though they were given an hour and a half and full access to the PGP manual.

While the onus is certainly on those designing encryption and anonymity technologies to invest in their usability, other opportunities exist to increase the numbers of encryption and anonymity users.  For one, we as customers can lobby technology companies for automatic encryption, or encryption by default, which requires no specialized knowledge – and indeed no conscious decision – to use.

For another, those of us in the education sector and civil society can continue to provide spaces for learning and discussion about encryption and anonymity.  This is especially crucial in the run-up to the UK’s Investigatory Powers Bill, which – as mentioned in the workshop – will be instrumental for setting the global tone on such legislation.  Examples of such spaces that we discussed included media trainings to create more narrative nuance in reporting on encryption and anonymity; trainings for policy-makers and judges on the technical parameters of these tools; supporting the security element of schools’ computing curriculum; creating MOOCs; and authoring compelling accounts (a stellar example being the documentary Citizenfour) of the importance of encryption and anonymity and the power struggles surrounding them.  Over the next year, CGHR aims to contribute to this space – stay tuned!

 
