Rarling up the net


Signify’s Research & Analytics Lead, Manvir Singh, breaks down the recent media attention around Grime artist Wiley and his offensive tweets, illustrating the difficulty of identifying specific forms of abuse on social media.

An entry for the term ‘rarl’ on Urban Dictionary, dated April 2020, describes it as "to cause a group or person to pay attention to something and get them to react. Likely originates from rile”.

The entry attributes the origin and use of the term to none other than Wiley, the Grime producer/MC who became the topic of much debate after posting a tirade of Antisemitic messages on Twitter.

Wiley: "I'm gonna rarl up the net bruv”.

Flashpoints as far back as his 2013 Glastonbury rant and as recent as the Stormzy diss track saga in 2019 (see graph) show how Wiley embraces online spats. As the likes of Stormzy, Dave and AJ Tracey became the new faces of Grime, the self-styled ‘Godfather’ still managed to retain a lot of media attention through his unfiltered rants, humour and what some of his fans and followers call ‘truths’.

Reactions on Twitter to Wiley’s “rarlings” over the last year.

It is not unusual to see Wiley in a war of words with Grime star Stormzy while, at the same time, taking on Connor, 16, from Doncaster on a Monday morning. Whilst his candidness appeals to his followers, there have certainly been warning signs of a racist mindset before.

In 2011, after a dispute on Twitter with R&B artist Jay Sean, Wiley responded to fans with takes like “watch u little corner shop kids, I will smash up shops”. In 2013, after being booed off stage at a festival, he posted a series of tweets calling the people of Cumbria “inbreds”.

Wiley’s 2011 Twitter spat with R&B artist Jay Sean.

On 24th July 2020, Wiley took this racist behaviour to another level. In a social media tirade that ran across all of his channels for a whole day, he posted abusive and blatantly Antisemitic messages aimed at Jewish people.

It wasn’t only the content of his posts that shocked; the fact that Twitter allowed them to stay up for so long without taking action became another talking point from the incident.

As a reaction to this, celebrities, public figures and sections of the UK public undertook a 48-hour boycott of Twitter to send a message to social media platforms, urging them to do more to tackle Antisemitism.

The Tech Conundrum

Wiley claims that his comments were not Antisemitic and has explained that much of the tirade stemmed from a commercial falling-out with his Jewish manager, John Woolf. Subsequent attempts at apologising have clearly missed the mark.

Whilst others speculate about the reasoning behind his comments and what the ramifications should be, our focus is drawn to the slow reaction from social platforms concerning Antisemitism and other types of abuse. This is particularly interesting for us as it highlights the limitations of machine learning systems in accurately monitoring for racist abuse. It was ultimately not a tech solution that brought Wiley’s posts down, but a wider public expression of dismay, including thousands of reports, and the media attention his rant received.

This is the conundrum that tech companies face as calls grow for swifter detection and removal of problematic content. There are evident and unsolvable biases within current content moderation systems, which stem from the rigid definitions and determinations used to detect abusive language and content.

There is room for these language detection systems to improve; however, there will always be limits to any machine learning system that cannot be trained to understand nuance or allusion. Even with the technological resources available to Facebook, there is still a reliance on thousands of human moderators to get a handle on problematic content. The burden therefore rests on a human workforce who work to strict guidelines, which inevitably introduces an element of subjectivity. At Twitter, meanwhile, human moderators need to be prompted by user reports, producing substantial delays.

In the case of Wiley’s posts, it was clear who and what he was talking about, given the context and the nature of his tweets, but as standalone posts some of them would be hard to detect through a moderation system. One example is where Wiley referred to Jewish people as “these lot”, “the real enemy” and “cowards”. Some of these posts may not have raised suspicion in isolation, but a human understanding of his intentions, tied to the stream of his previous posts, allows us to identify the blatant Antisemitism within them.
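To make that detection gap concrete, here is a minimal sketch in Python of the kind of rigid keyword filter described above. The blocklist entries and example posts are illustrative assumptions, not Signify’s or any platform’s actual rules; the point is simply that context-dependent phrases like “these lot” or “the real enemy” contain nothing for a fixed term list to anchor on, even though a human reading the surrounding stream of posts sees them immediately.

```python
# Minimal sketch of a rigid blocklist filter (illustrative only).
# The placeholder terms stand in for explicit slurs a real system might list.
BLOCKLIST = {"explicit_slur_1", "explicit_slur_2"}

def flag_post(text: str) -> bool:
    """Flag a post only if it contains a blocklisted term verbatim."""
    tokens = {word.strip(".,!?\"'").lower() for word in text.split()}
    return bool(tokens & BLOCKLIST)

# Phrases that read as blatantly Antisemitic in context, but contain
# no term a fixed list could catch.
for post in ["these lot are the real enemy", "cowards, all of them"]:
    print(flag_post(post), "-", post)   # prints False for both
```

A real moderation pipeline is far more sophisticated than this, but the underlying limitation is the same: a rule has to name what it is looking for.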

This is without mentioning how moderation systems can be gamed. For instance, spammers have been observed using terms like ‘Vi @ gra’ instead of ‘Viagra’ to slip their messages past filters. Even at a human level there was significant confusion over what Wiley meant by the colloquialism “hold corn”, which appeared in one of his more extreme posts and could be interpreted as literally inciting gun violence against Jews.
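Tricks like ‘Vi @ gra’ can be partly countered by normalising text before matching, as in the hedged sketch below. The normalisation rules here are our own illustrative assumptions rather than any platform’s real pipeline, and they bring their own problems: collapsing a whole message into one string means matches can fire across word boundaries, trading missed abuse for false positives.

```python
import re

BLOCKED_TERM = "viagra"   # stand-in for any blocklisted term

def normalise(text: str) -> str:
    """Lower-case the text and strip everything that isn't a letter."""
    return re.sub(r"[^a-z]", "", text.lower())

message = "Buy Vi @ gra now"
print(BLOCKED_TERM in message.lower())     # False: a naive check is defeated by the spacing
print(BLOCKED_TERM in normalise(message))  # True: normalisation recovers the term
```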

There is still much debate in academia and research over whether a holistic approach to detecting abuse is even possible. Understanding Antisemitism, for example, is a different subject matter from understanding the racism that black people face.

A useful example of subject matter understanding is how we are currently developing Signify’s Threat Matrix service to identify and detect online racist abuse. One model we have successfully trained can tell the difference between the subject and object of a message, e.g. ‘X should shoot more often’ rather than ‘someone needs to shoot X’. But training a machine to understand the historic nuance of racist tropes is a much more complicated challenge.
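As an illustration of that subject/object idea (not Signify’s actual Threat Matrix code), the sketch below uses an off-the-shelf dependency parser, spaCy’s en_core_web_sm model, to check whether a name appears as the grammatical subject or object of a sentence. The name ‘Smith’ is a hypothetical placeholder.

```python
import spacy

# Illustrative only: uses spaCy's small English model, which must be installed
# separately (python -m spacy download en_core_web_sm).
nlp = spacy.load("en_core_web_sm")

def role_of(name: str, text: str) -> str:
    """Return whether `name` is parsed as the subject or the object of `text`."""
    for token in nlp(text):
        if token.text == name:
            if token.dep_ in ("nsubj", "nsubjpass"):
                return "subject"
            if token.dep_ in ("dobj", "pobj", "dative"):
                return "object"
    return "unknown"

# 'Smith' is a placeholder. The first sentence is harmless football talk;
# the second reads as a threat, and the parse reflects the difference.
print(role_of("Smith", "Smith should shoot more often"))   # subject
print(role_of("Smith", "someone needs to shoot Smith"))    # object
```

Even when this distinction is reliable, it says nothing about tropes, dog whistles or historical allusion, which is the harder problem described above.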

In this instance, the social media platforms could have acted more quickly in flagging and removing Wiley’s messages on the day they were posted; however, technology is not yet strong enough to provide a complete solution. Even while holding these platforms to account, it is important to understand the limitations of the technology as a whole.

The sad truth is that most Antisemitism on these platforms doesn’t come in the form of a full-day tirade by someone notorious for ‘rarling the net’. Most of it happens in daily micro-aggressions, posted by unknown individuals, that cement the very tropes and stereotypes Wiley flagrantly displayed.

——————————

Signify is an ethical data science company. To find out more about our work fighting racism in football, get in touch.
