Hate Speech and Social Media

Funded by: ESRC and Google

Funding: £124,986

Contributors: Williams, Burnap, Rana, Housley, Edwards, Voss, Procter & Knight

Funded between 2013 and 2014, this project developed a range of machine learning classifiers designed to identify online hate speech across the protected characteristics of race, religion, sexual orientation and disability.  These classifiers were used in the first study of online hate speech produced in the aftermath of a terrorist event.
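The project's actual classifiers are described in its published outputs; as a minimal illustration of the general approach, the sketch below implements a bag-of-words Naive Bayes text classifier in plain Python. All training examples and the labels ("antagonistic" / "benign") are hypothetical placeholders, not project data.

```python
# Minimal sketch (assumptions, not the project's method): a Laplace-smoothed
# multinomial Naive Bayes classifier over bag-of-words features.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs. Returns a model holding
    log class priors and Laplace-smoothed log word likelihoods."""
    label_counts = Counter(label for _, label in examples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in examples:
        for tok in tokenize(text):
            word_counts[label][tok] += 1
            vocab.add(tok)
    total = sum(label_counts.values())
    model = {"priors": {}, "likelihoods": {}}
    for label, n in label_counts.items():
        model["priors"][label] = math.log(n / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        model["likelihoods"][label] = {
            w: math.log((word_counts[label][w] + 1) / denom) for w in vocab
        }
        # fallback log-probability for unseen words
        model["likelihoods"][label]["__unk__"] = math.log(1 / denom)
    return model

def classify(model, text):
    """Return the label with the highest posterior log-score."""
    scores = {}
    for label, prior in model["priors"].items():
        lk = model["likelihoods"][label]
        scores[label] = prior + sum(
            lk.get(tok, lk["__unk__"]) for tok in tokenize(text)
        )
    return max(scores, key=scores.get)

# Hypothetical toy training data for illustration only:
model = train([
    ("they should all be driven out", "antagonistic"),
    ("send them all away now", "antagonistic"),
    ("thoughts are with the victims", "benign"),
    ("our thoughts and support to the family", "benign"),
])
print(classify(model, "drive them all out"))
```

In practice the project's classifiers were trained separately across protected characteristics (race, religion, sexual orientation and disability), and a production system would use richer features and human-annotated training data.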

Using Big Data to measure the online reaction to the murder of Lee Rigby in 2013, the research focused specifically on the production and spread of racial and religious online hate speech and the Twitter battle between police and far-right political groups in the first 36 hours following the attack.

Key findings include:

  • Online hate speech has a ‘half-life’, which has significant implications for police and policy interventions in terrorist events;
  • Online hate speech in the aftermath of the Rigby murder peaked in the first 24 hours following the attack before declining sharply over the 15-day analysis window, suggesting that police need to focus their interventions on this early stage to have the greatest effect against online hate;
  • Far-right political groups and individuals were quick to use the attack to further their cause, and were more likely to produce tweets containing religious and racial hate speech;
  • Tweets from the far-right were more likely to survive (be retweeted over longer periods) in the first 36 hours following the event, but were less likely to be retweeted by a large number of Twitter users;
  • In the wake of the attack, tweets from police and the media were around five times more likely to be retweeted than tweets from other users;
  • The dominance of traditional media and police information flows on social media in the aftermath indicates these are likely to be effective channels for countering rumour, speculation and hate.

The project was carried out using the ESRC-funded Cardiff Online Social Media Observatory (COSMOS) software, which continues to be supported by the Social Data Science Lab at Cardiff University. Major outputs from this project include ‘Tweeting the Terror: Modelling the Social Media Reaction to the Woolwich Terrorist Attack’, Social Network Analysis and Mining, and ‘Cyberhate on Social Media in the Aftermath of Woolwich: A Case Study in Computational Criminology and Big Data’, British Journal of Criminology.