How to police ‘digital wildfires’ on social media

Photo: A picture paints a thousand words, which can be manipulated into a false narrative. Shutterstock/1000words

Helena Webb, University of Oxford and Marina Jirotka, University of Oxford

On October 29, 2012, New York City was hit by Hurricane Sandy. It was a time of great worry in the city, and many people turned to social media platforms such as Twitter and Facebook to gather and share news about flooding, power outages, damage to property and more.

During this time, a Twitter user known as @ComfortablySmug posted a series of “breaking news” updates about the effects of the hurricane. These included reports that the stock exchange had flooded, Manhattan was going to experience a total power outage and that the subway system would be closed for a week. These updates were frightening but false.

Despite the lack of evidence behind @ComfortablySmug’s claims, the posts nevertheless spread rapidly through retweets and some were reported as fact on television news channels. It took time for the false claims to be refuted by the organisations involved, and the propagation of this content undoubtedly helped to heighten fears during the hurricane.

In a 2013 report, the World Economic Forum described the posts made by @ComfortablySmug during the hurricane as threatening a “digital wildfire”. Digital wildfires break out when content that is either intentionally or unintentionally misleading or provocative spreads rapidly with serious consequences.

According to the WEF report, the contemporary popularity of social media and the ways in which platforms facilitate the spread and sharing of content mean that potential digital wildfire scenarios arise frequently in modern society.

They are particularly likely to spark in times of tension; for instance, there have been numerous cases of unverified and/or inflammatory content spreading rapidly on social media in the aftermath of recent terrorist attacks. A doctored image of Canadian Sikh journalist Veerender Jubbal falsely connecting him to terror crimes went viral following both the Paris attacks of November 2015 and the Nice attack of July 2016.

The spread of further false rumours following the Nice attack – for instance, that the Eiffel Tower was on fire and that some city residents had been taken hostage – led the French government to appeal to social media users to share information only from official sources.

Evidently, digital wildfires can have devastating consequences for the reputation and well-being of individuals, groups, communities and even entire populations. Meanwhile, the speed with which content spreads can make it very difficult for official agencies to respond in a timely manner: by the time the spread of content slows or stops, massive damage may already have been done. But what can be done to limit or prevent the spread of this misinformation?

If we accept that digital wildfires can cause massive damage through the rapid propagation of misleading or provocative content on social media, we might also ask how they can be governed and regulated. The effects of these “global risk factors”, as the WEF describes them, can in theory be limited – the question is how? The answer is of relevance to a large number of groups: policy makers, social media platforms, law enforcement organisations, educators, civil society, anti-harassment groups, and social media users themselves. It is, however, an immensely complex topic.

First, it can be difficult to identify practical solutions. For example, we could consider using legal mechanisms to punish individuals for making inappropriate posts. However, the cross-national nature of internet use complicates matters of jurisdiction and, in any case, legal punishments are applied retrospectively and do nothing to manage digital wildfires in real time.

Second, we face ethical questions over freedom of speech. Any attempts to block individuals from posting or sharing content – bar some extreme forms – would likely be viewed by many groups, including the major social media platforms, as an unacceptable barrier to individuals’ rights to express themselves freely.

In our research project, we seek to identify opportunities for responsible governance mechanisms that can manage the harms caused by the spread of misleading and/or provocative content, while also protecting freedom of speech.

Since starting work in 2014, we have come to focus on self-governance as a potentially responsible and effective means of regulating content. Under self-governance, social media users manage their own and others’ online behaviour – for example, by posting corrections to false information, dismissing rumours and countering hate speech – and this activity could be supported by further technical mechanisms. Although self-governance alone is unlikely ever to be fully effective, and it risks spiralling into online “shaming” and “digilantism”, it does have the capacity to work in real time and so prevent the mass spread of posts that can result in a digital wildfire.

As our project continues, we are undertaking computational modelling work to examine the impact of self-governance practices on the propagation of content. We are also engaging with stakeholders to assess how further measures, such as education programmes and online community building, can encourage, consolidate and enhance self-governance on social media in order to limit the threat posed by digital wildfires.
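The intuition behind that modelling work can be illustrated with a deliberately simple sketch. The code below is not the project’s actual model; it is a toy cascade simulation under assumptions we have chosen for illustration: a rumour spreads when each sharer exposes a fixed number of random contacts, and a designated fraction of users act as “correctors” who never reshare and whose corrective posts steadily damp the rumour’s onward share probability. All parameter names and values (`share_p`, `contacts`, `correction_strength` and so on) are hypothetical.

```python
import random


def simulate(n_users=5_000, correctors_frac=0.0, share_p=0.08,
             contacts=25, correction_strength=0.9, steps=30, seed=0):
    """Toy rumour cascade (illustrative assumption, not the project's model).

    Each step, every user who has just shared the rumour exposes `contacts`
    randomly chosen users; an exposed ordinary user reshares with
    probability p. 'Correctors' never reshare, and their presence decays p
    each step in proportion to their prevalence, standing in for corrective
    posts slowing the wildfire in real time. Returns total number of sharers.
    """
    rng = random.Random(seed)
    correctors = set(rng.sample(range(n_users),
                                int(n_users * correctors_frac)))
    seed_user = next(u for u in range(n_users) if u not in correctors)
    sharers = {seed_user}
    frontier = [seed_user]
    p = share_p
    for _ in range(steps):
        new = []
        for _ in frontier:
            for _ in range(contacts):
                v = rng.randrange(n_users)
                if v not in sharers and v not in correctors and rng.random() < p:
                    sharers.add(v)
                    new.append(v)
        # corrections posted this step damp onward sharing
        p *= 1 - correction_strength * correctors_frac
        frontier = new
        if not frontier:  # the rumour has died out
            break
    return len(sharers)


def average_cascade(correctors_frac, runs=20):
    """Average cascade size over several random seeds."""
    return sum(simulate(correctors_frac=correctors_frac, seed=s)
               for s in range(runs)) / runs


if __name__ == "__main__":
    print("no correctors:", average_cascade(0.0))
    print("30% correctors:", average_cascade(0.3))
```

Even this crude model shows the qualitative effect of interest: with no correctors the cascade typically saturates much of the population, while a sizeable corrector fraction chokes it off within a few steps. The project’s actual modelling is considerably more sophisticated than this sketch.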


This article was written by Marina Jirotka with input from her colleagues on the Digital Wildfire project team: Helena Webb, William Housley, Rob Procter, Adam Edwards, Bernd Stahl, Pete Burnap, Omer Rana and Matthew Williams.

Although Twitter is an open platform, there is debate among researchers over the extent to which it is ethically appropriate to reproduce tweets in publications and bring them to the attention of a wider audience. The tweets in this article have been selected as they come from institutional sources or from accounts/users that have already been in the public eye for some period of time.


Helena Webb, Research Associate – Human Centred Computing, University of Oxford and Marina Jirotka, Professor of Human Centred Computing, University of Oxford

This article was originally published on The Conversation. Read the original article.

