
Evidence for the House of Commons Science and Technology Select Committee



I recently gave evidence to the House of Commons Science and Technology Select Committee.  This was based on written evidence co-authored with my colleague, Anne Alexander, and submitted to their ongoing inquiry into social media data and real time analytics.

Both Anne and I research the use of social media during contested times; Anne looks at its use by political activists and labour movement organisers in the Arab world, and I look at its use in human rights reporting.  In both cases, the need to establish facticity is high, as is the potential for the deliberate or inadvertent falsification of information.  Similarly to the case that Carruthers makes about war reporting, we believe that the political-economic, methodological, and ethical issues raised by media dynamics in the context of crisis are bellwethers for the dynamics in more peaceful and mundane contexts. We therefore wished to highlight the following points to the Committee:

1.  Social media information is vulnerable to a variety of distortions – some typical of all information, and others more specific to the characteristics of social media communications
For instance, the commercial nature of social media platforms intervenes in the content of social media, including with respect to what content users see and the prompts they receive to create content.  Political contexts also influence social media information.  This is evident in contemporary Egypt, where social media has been used for revolutionary ends (organising activism) as well as for counter-revolutionary ends (discrediting doctors and academics).

Compared to information collected via purpose-built methods for researching social processes like interviews and surveys, information harvested from social media can be relatively devoid of clues as to these contexts of production.  This may be due to metadata loss in uploading a video to YouTube, for example, or to the relative ease of reproduction and circulation of digital information, which may disembody the information from the time, place, and source of its production.

Social media’s affordance of disembodiment facilitates the falsification of information for those so inclined.  For example, Witness has demonstrated that a video of a man being water-cannoned has circulated on YouTube at different times labelled as taking place variously in Venezuela, Colombia, and Mexico.  Because of the distortions arising from contexts of production as well as because of potential falsifications, social media information – like all information – must be verified when used to establish events.

2.  If social media information is used to establish events, it must be verified; while technology can hasten this process, it is unlikely to ever occur in real time due to the subjective, human element of judgment required
How we determine what we think is the truth about an incident tends to hinge on a subjective - and thus human - judgment of the evidence at hand.  This judgment may be subjective, but it is also an expertise that can be developed with respect to knowing the tools, sources, and methods through which information and source veracity can be corroborated.  This corroboration takes time, as it may involve identifying details about the information in question such as place, time, and source of production; unearthing and cross-referencing other types of evidence; and checking information with existing online and offline networks of sources.  Technology can hasten the verification process, as evident in Diakopoulos, De Choudhury, and Naaman’s Seriously Rapid Source Review interface, which, among other ‘computational information cues,’ visualises the geolocations of a source’s network.  The implication of this cue is that a source who knows a lot of people in a particular location is more likely to share reliable information about that location over social media than someone with little to no network connections in that area.  Though such analytics informing social media verification could be generated in real time, verification itself is unlikely to ever occur in real time because of the human element of judgment involved.
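The geolocation cue described above can be illustrated with a minimal sketch.  This is not the Seriously Rapid Source Review implementation, just a hypothetical illustration of the general idea: scoring a source by the fraction of their network connections located where an event is claimed to have happened.

```python
from collections import Counter

def location_cue(network_locations, claimed_location):
    """Fraction of a source's network connections located in the place
    an event is claimed to have happened.  A higher fraction suggests,
    but does not prove, local knowledge; a human must still judge."""
    if not network_locations:
        return 0.0
    counts = Counter(network_locations)
    return counts[claimed_location] / len(network_locations)

# A source whose connections cluster in Cairo scores higher for a
# Cairo-located claim than one with no connections there.
cairo_heavy = ["Cairo", "Cairo", "Alexandria", "Cairo", "London"]
print(location_cue(cairo_heavy, "Cairo"))  # 0.6
print(location_cue(["London", "Paris"], "Cairo"))  # 0.0
```

Such a score could be computed as fast as the network data arrives; what cannot be automated is the judgment about what the score means in a given case.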

3.  Verifying social media information may require identifying its source, which has ethical implications related to informed consent and anonymization
Furthermore, as verification practices often involve identifying the source of the information in order to evaluate that source’s credibility, their deployment against social media information raises ethical concerns related to the particular difficulty of securing informed consent and guaranteeing anonymity when researching social media.  For example, agreeing to a social media platform’s T&Cs does not necessarily correspond to informed consent, as research has demonstrated that users assent without reading these long documents in order to open their accounts (Facebook’s terms of service, for example, would take more than two hours to read!).  Anonymization is difficult to achieve in the context of databases that can be cross-referenced as well as difficult to future-proof in the face of increasingly sophisticated technology such as that related to automatic facial recognition.
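The cross-referencing problem can be made concrete with a small, entirely hypothetical sketch of a linkage attack: a dataset released without names can be re-identified by joining it to a public dataset on shared quasi-identifiers (here, an invented postcode and birth-year pair).

```python
# Hypothetical data for illustration only: an 'anonymised' research
# release and a named public register that share quasi-identifiers.
anonymised = [
    {"postcode": "CB2 1TN", "birth_year": 1984, "posts_flagged": 12},
    {"postcode": "E1 6AN",  "birth_year": 1990, "posts_flagged": 3},
]
public_register = [
    {"name": "A. Example", "postcode": "CB2 1TN", "birth_year": 1984},
    {"name": "B. Sample",  "postcode": "SW1A 1AA", "birth_year": 1975},
]

def reidentify(anon_rows, public_rows, keys=("postcode", "birth_year")):
    """Return anonymised rows matched to named public rows whose
    quasi-identifiers agree on every key."""
    matches = []
    for a in anon_rows:
        for p in public_rows:
            if all(a[k] == p[k] for k in keys):
                matches.append({**a, "name": p["name"]})
    return matches

for row in reidentify(anonymised, public_register):
    print(row["name"], "re-identified")
```

Removing names alone is therefore no guarantee of anonymity: the more datasets that can be joined in this way, the harder anonymization is to achieve or future-proof.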

4.  Another way to think about social media information is as what Hermida calls an ‘awareness system,’ which reduces the need to collect source identities; under this approach, researchers look at volume rather than veracity to recognise information of interest
Using social media information to identify public interest can be done in real time and does not necessarily require the collection of source identities, thus avoiding the above ethical concerns.  Volume of interest can then be a starting point for an investigation triangulating a variety of information channels and methods – methods, like the interviews and surveys that the government has traditionally employed, which are more conducive to securing informed consent and can provide more cues about the context of the information’s production.
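The ‘awareness system’ approach can be sketched minimally: bucket post timestamps into fixed time windows and flag windows whose volume is unusually high relative to the average.  This is a hypothetical illustration of the volume-over-veracity idea, not a description of any particular system; note that it uses only counts, never source identities.

```python
from collections import Counter

def spikes(timestamps, window=60, factor=2.0):
    """Bucket post timestamps (in seconds) into fixed windows and return
    the start times of windows whose post count exceeds `factor` times
    the mean count.  Only volumes are used; no identities are needed."""
    buckets = Counter(t // window for t in timestamps)
    mean = sum(buckets.values()) / len(buckets)
    return sorted(w * window for w, c in buckets.items() if c > factor * mean)

# Steady chatter, then a burst of posts in the window starting at t=120.
ts = [5, 70, 125, 126, 127, 128, 129, 130, 190]
print(spikes(ts))  # [120]
```

A flagged window is only a prompt for further investigation by other methods, not evidence that the underlying claims are true.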

Taken together, these points form a cautionary tale about some of the contextual, methodological, and ethical complications related to the government analysing social media information.  Judging by the questions that the Committee posed to my panel (my co-panelists were John Preston, Professor of Education at the University of East London, and Mick Yates, Visiting Professor at the Consumer Data Research Centre of the University of Leeds) and to the preceding panel (whose panelists were Timo Hannay, Managing Director of Digital Science, Carl Miller, Research Director of Demos’ Centre for the Analysis of Social Media, and Sureyya Cansoy, Director of techUK’s Tech for Business and Consumer Programmes), its members are considering this analysis from a number of perspectives.  Other topics of discussion included a possible skills gap in the UK with respect to this kind of research, the relationship of social media information to census data, the implications of the EU Data Protection Legislation for social media research, and the public’s awareness of what happens to their social media data.  I was delighted to have contributed to the Committee’s public inquiry, and urge the government to continue approaching social media analysis in such a transparent fashion.

About this website

This is the website for Ella McPherson's work related to her 2014-17 ESRC-funded research project, Social Media, Human Rights NGOs, and the Potential for Governmental Accountability.