HateLab was approached by ITV News to contribute to a special report on the rise of online hate speech. The report highlights how online hate speech continues to be a pressing and intractable social problem.

HateLab’s recent paper published in the British Journal of Criminology established an ecological link between online hate speech and hate crime on the streets of London. A growing body of research now shows that online hate victimisation is part of a wider process of harm that can begin on social media and then migrate to the physical world.

Online hate speech can be triggered by events such as terror attacks, political votes and high-profile court cases. In the wake of these events, people can take to social media to express their prejudices, sometimes directly targeting minority groups and individuals.

In the first six months of 2019, the Community Security Trust, a charity that supports the Jewish community, recorded 323 UK-based online antisemitic incidents, 36% of all incidents it recorded and an increase of 46% on the same period the year before. In the first six months of 2018, Tell MAMA, a charity providing support to victims of anti-Muslim hate crime, recorded 207 UK-based incidents of Islamophobic online hate speech, representing 34% of all reports. The previous year, online Islamophobic hate speech accounted for 30% of all recorded incidents.

Stonewall, a charity that supports lesbian, gay, bisexual and transgender people in the UK, found in its 2017 survey that 10% of respondents had suffered direct online hate abuse, five percentage points higher than in the 2013 survey. Disaggregated figures for 2017 show that online victimisation was significantly higher among respondents who identified as transgender (26%), non-binary LGBT (26%), young LGBT (23%) and black, Asian and minority ethnic LGBT (20%). Witnessing online LGBT hate material aimed at others also varied by age and ethnicity. Overall, 45% of respondents had seen such material, rising to 72% for young LGBT and 66% for black, Asian and minority ethnic LGBT respondents; in 2013, 28% of LGBT respondents had encountered online hate material. The Galop 2016 LGBT+ Hate Crime Survey found that 30% of respondents reported experiencing online hate crime.

In February 2019, Ofcom published its annual report, “Children and parents: media use and attitudes”. Almost half of 12-15 year olds surveyed in 2018 reported encountering hateful content online, unchanged from the 2017 figure but up from 34% in 2016. Of those who encountered this content, 42% took action, including reporting the post to the website and commenting on its inappropriateness.

In 2016 Facebook, Microsoft, Twitter and YouTube signed up to the European Commission’s Code of Conduct on Countering Illegal Hate Speech Online, with Instagram, Google+, Snapchat and Dailymotion joining in 2018. By signing, the companies agreed to introduce rules banning hateful conduct and to create mechanisms, including dedicated teams, for the review and possible removal of illegal content within 24 hours. In 2018 almost all participating companies reviewed the majority of notifications sent to them within 24 hours, and 72% of the reported posts were removed, a slight improvement on the year before. In 2016, when monitoring first began, only 40% of notifications were reviewed within 24 hours, and just 28% of reported posts were removed. In each round of monitoring, all companies except Twitter have increased their removal rates. Xenophobia (including anti-migrant hatred) was the most commonly reported ground of hate speech (17%), followed by sexual orientation (16%) and anti-Muslim hatred (13%).

Governments and tech companies cannot solve the problem alone, and HateLab is conducting experiments to determine whether social media users themselves can stem the spread of hate by using counter-speech. Counter-speech is any direct or general response to hateful or harmful speech that seeks to undermine it. Influential speakers can favourably shape discourse through counter-speech in two ways: by having a positive effect on the hate speech producer, persuading them to stop propagating hate speech, or by having an impact on the audience, whether by communicating norms that make hate speech socially unacceptable or by ‘inoculating’ the audience against the speech so they are less easily influenced by it.

Initial HateLab results show that counter-speech is effective in curtailing the length of hateful social media conversations when multiple unique counter-speech contributors engage with the hate speech producer. However, not all counter-speech is productive: evidence shows that individuals who publicly insult hate speech producers often inflame the situation, provoking further hate speech. In the summer of 2020, HateLab will publish results from a quasi-experimental study identifying which types of counter-speech are most effective.