
Social Media and Human Rights


Richard Dent is a PhD student in Cambridge's Department of Sociology. His research examines 'social networking for social good' in the UK, a growing online movement that includes the sharing economy and commons-based peer production. Richard is conducting his PhD through an open-access, open-peer-review process, sharing the main body of the thesis online whilst opening the text to invited contributors for comment.

Who regulates the algorithms that influence our experience online? That is the question Joss Hands, a lecturer at Anglia Ruskin University researching media and critical theory, put to the Researching (with) Social Media reading group in November 2014. Algorithms analyse much of what we do online, scanning for specific keywords on free platforms such as Facebook or Gmail. The resulting data can be sold to advertisers, used to shape our social experiences, or subjected to surveillance. This can produce desirable outcomes: users see adverts for products that relate to their lives; my Facebook feed focuses on friends I often engage with rather than virtual strangers; terrorists can be intercepted by the security services. However, Hands suggests that the power of algorithms needs to be examined, echoing Eli Pariser's concern about the 'filter bubble'. Can we trust those who control the algorithms?
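
To make that feed-sorting idea concrete, here is a minimal sketch of an engagement-weighted ranking – a toy illustration in Python with invented names and data, not a description of Facebook's actual algorithm:

```python
from collections import Counter

# Hypothetical interaction log: each entry names the friend whose post
# the user liked or commented on. All data here is invented.
interactions = ["alice", "bob", "alice", "carol", "alice", "bob"]

# Candidate posts waiting to be ranked.
posts = [
    {"author": "dave", "text": "Holiday photos"},
    {"author": "alice", "text": "We got engaged!"},
    {"author": "bob", "text": "Started a new job today"},
]

def rank_feed(posts, interactions):
    """Order posts by how often the user has engaged with each author.

    Friends the user interacts with frequently float to the top;
    virtual strangers sink to the bottom.
    """
    engagement = Counter(interactions)
    return sorted(posts, key=lambda p: engagement[p["author"]], reverse=True)

for post in rank_feed(posts, interactions):
    print(f"{post['author']}: {post['text']}")
```

Even in this toy version, the consequence is visible: whoever defines the sorting key decides whose voices we see.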

Facebook was recently criticised for using its main algorithm to bias users' news feeds towards more positive or more negative content. The potential impact on individual users was unknown before the study, and participants were never asked for explicit consent to the experiment. Facebook's response was to direct people to its terms and conditions, which allow the platform to conduct research experiments – although it seems this specific research clause was added only after the study took place. In his talk, Hands pointed to the now-common social media practice of algorithmically scanning for wedding announcements, or for the keyword 'congratulations', in order to target baby-product advertisements. And we have no control over these personalised adverts as long as we use free services like Facebook or Gmail.
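
As a rough sketch of the keyword-scanning practice Hands described, the snippet below matches post text against a keyword-to-advert table. The keywords and ad categories are hypothetical; real ad-targeting systems are far more elaborate:

```python
import string

# Hypothetical keyword-to-advert rules of the kind Hands described;
# both the keywords and the ad categories are invented for illustration.
AD_RULES = {
    "congratulations": "baby products",
    "engaged": "wedding services",
    "wedding": "wedding services",
}

def target_adverts(post_text):
    """Return the ad categories triggered by keywords in a post."""
    words = {w.strip(string.punctuation) for w in post_text.lower().split()}
    return {advert for keyword, advert in AD_RULES.items() if keyword in words}

print(target_adverts("Congratulations to the happy couple!"))
# prints: {'baby products'}
```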

What effects are algorithms having on society, and who is accountable if things go wrong? This is a new social relationship between corporations and citizens, and its boundaries are unclear. If Facebook can influence the emotions of its users, it could feasibly use its algorithm to push people towards specific political perspectives or worldviews. Scary stuff. Can Facebook users vote with their feet should these practices go too far? Rebecca MacKinnon recently suggested the idea of a union of Facebook users, a lobby group to keep Facebook in check. But whilst Google, Facebook and Apple dominate the leading technologies for computing, social networking and digital communication, what choice do we have? The social networking disrupter Ello has pledged not to sell user data for use in algorithms, but it is still bound by the economics of scale: can it realistically serve tens of millions of users, with all the server costs that entails, without monetising their data? Not everyone is convinced it can.

This issue may be intractable. John Naughton and others have pointed out that when we use free services, we open ourselves to security-service algorithms as well as to corporate ones. Are we creating a world where too much of our lives is influenced by these algorithms – a tyranny of algorithms? Maybe. But as Naughton also points out, a fightback has begun.

About this website

This is the website for Ella McPherson's work related to her 2014-17 ESRC-funded research project, Social Media, Human Rights NGOs, and the Potential for Governmental Accountability.