New algorithm can spot Twitter cyberbullies
A new algorithm can identify Twitter accounts carrying out bullying and troll-like behaviour with 90% accuracy, researchers have said.
The machine learning software uses natural language processing and sentiment analysis on tweets to classify them as instances of cyberbullying or cyberaggression.
The algorithm has been developed by researchers at Binghamton University in the United States.
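To illustrate the kind of approach described above, the sketch below trains a simple text classifier on a handful of labelled tweets. It is a minimal example assuming a TF-IDF bag-of-words representation and a logistic regression model, not the Binghamton team's actual system; the example tweets and labels are invented.

    # Minimal sketch of a tweet classifier in the spirit of the approach
    # described above. Tweets, labels and model choices are illustrative
    # assumptions, not the researchers' actual implementation.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labelled examples; a real study would use thousands of annotated
    # tweets and could add sentiment scores alongside the word features.
    tweets = [
        "you are pathetic and everyone hates you",
        "nobody wants you here, just leave",
        "great game last night, well played",
        "congratulations on the new job",
    ]
    labels = ["aggressive", "aggressive", "normal", "normal"]

    # TF-IDF turns each tweet into weighted word features; the linear
    # model then learns which features signal aggression.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(tweets, labels)

    print(model.predict(["you are a loser, just leave"]))  # expected: ['aggressive']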
The research comes as a number of high-profile personalities in the UK – including Gary Lineker and Rachel Riley – backed a new campaign urging users to ignore and report abusive messages, in a bid to cut the spread of hate on social media.
The researchers said the software identified bullying and aggressive accounts on Twitter with 90% accuracy.
Jeremy Blackburn, a computer scientist on the research team, said the new algorithm drew on information from Twitter profiles as well as connections between accounts.
“We built crawlers – programs that collect data from Twitter via a variety of mechanisms,” he said.
“We gathered tweets of Twitter users, their profiles, as well as (social) network-related things, like who they follow and who follows them.”
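A crawler along those lines can be sketched against Twitter's standard v1.1 REST endpoints. The snippet below is an assumed reconstruction, not the team's actual crawlers: the bearer token is a placeholder, and the endpoints shown are the public ones for timelines, profiles and follower lists.

    # Hypothetical sketch of a Twitter crawler using the public v1.1 API.
    # BEARER_TOKEN is a placeholder credential; fill in your own.
    import requests

    BEARER_TOKEN = "YOUR_BEARER_TOKEN"
    HEADERS = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    API = "https://api.twitter.com/1.1"

    def fetch_tweets(screen_name, count=200):
        """Collect a user's recent tweets."""
        r = requests.get(f"{API}/statuses/user_timeline.json",
                         headers=HEADERS,
                         params={"screen_name": screen_name, "count": count})
        r.raise_for_status()
        return r.json()

    def fetch_profile(screen_name):
        """Collect profile metadata (bio, account age, follower counts)."""
        r = requests.get(f"{API}/users/show.json",
                         headers=HEADERS,
                         params={"screen_name": screen_name})
        r.raise_for_status()
        return r.json()

    def fetch_follower_ids(screen_name):
        """Collect the social-network side: who follows this user."""
        r = requests.get(f"{API}/followers/ids.json",
                         headers=HEADERS,
                         params={"screen_name": screen_name})
        r.raise_for_status()
        return r.json()["ids"]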
He said that looking for links between users could help differentiate between aggressive behaviour and regular interactions.
“In a nutshell, the algorithms ‘learn’ how to tell the difference between bullies and typical users by weighing certain features as they are shown more examples.”
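That weighting of features can be made concrete with a small supervised-learning sketch. Here each account is reduced to a few assumed numeric features – an abusive-word ratio, a follower/following ratio and account age – and a logistic regression learns a weight for each from labelled examples. The feature set, numbers and labels are invented for illustration.

    # Hypothetical sketch of learning feature weights from labelled accounts.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row is one account: [abusive-word ratio, followers/following
    # ratio, account age in years]. All values are invented.
    features = np.array([
        [0.30, 0.1, 0.5],   # bully-like: abusive wording, few followers, new account
        [0.25, 0.2, 1.0],
        [0.02, 1.5, 6.0],   # typical user
        [0.01, 0.9, 4.0],
    ])
    labels = np.array([1, 1, 0, 0])  # 1 = bully, 0 = typical user

    clf = LogisticRegression().fit(features, labels)

    # The learned coefficients are the "weights" the quote refers to:
    # a larger magnitude means that feature counts for more in the decision.
    print(dict(zip(["abusive_ratio", "follower_ratio", "age"], clf.coef_[0])))
    print(clf.predict([[0.28, 0.15, 0.3]]))  # expected: [1]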
While the computer scientist hoped the tool could be used to react to cyberbullying, he admitted more needed to be done to proactively cut abuse on social media platforms.
“One of the biggest issues with cyber safety problems is that the damage being done is to humans, and is very difficult to ‘undo’,” he said.
“For example, our research indicates that machine learning can be used to automatically detect users that are cyberbullies, and thus could help Twitter and other social media platforms remove problematic users.
“However, such a system is ultimately reactive: it does not inherently prevent bullying actions; it just identifies them taking place at scale.
“And the unfortunate truth is that even if bullying accounts are deleted, even if all their previous attacks are deleted, the victims still saw and were potentially affected by them.”
Social media platforms have come under increased pressure to do more to protect their users from hateful and harmful content after concerns were raised about the impact of such sites on mental health and wellbeing, particularly among young people.
A government white paper published earlier this year proposed the introduction of a statutory duty of care, which would compel social networks to protect their users or face large fines.