Scientists have developed an artificial intelligence (AI) system that could help counter hate speech directed at disenfranchised minorities such as the Rohingya community.
The system developed by researchers from Carnegie Mellon University in the US can rapidly analyse thousands of comments on social media, and identify the fraction that defend or sympathise with voiceless groups.
Human social media moderators, who could not possibly manually sift through so many comments, would then have the option to highlight this "help speech" in comment sections, the researchers said.
"Even if there's lots of hateful content, we can still find positive comments," said Ashiqur R KhudaBukhsh, a post-doctoral researcher at Carnegie Mellon University's Language Technologies Institute (LTI) said.
Finding and highlighting these positive comments might do as much to make the internet a safer, healthier place as would detecting and eliminating hostile content or banning the trolls responsible, the researchers said.
The Rohingyas, who began fleeing Myanmar in 2017 to avoid ethnic cleansing, are largely defenceless against online hate speech, they said.
Many of them have limited proficiency in global languages such as English, and they have little access to the internet.
To find relevant help speech, the researchers used their technique to search more than a quarter of a million YouTube comments, in what they believe is the first AI-focused analysis of the Rohingya refugee crisis.
The ability to analyse such large quantities of text for content and opinion is possible because of recent major improvements in language models, said Jaime Carbonell, LTI director and a co-author on the study.
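The study does not spell out its classifier in this article, but the general approach it describes, using a language model to sort large volumes of comments into supportive and hostile categories, can be illustrated with a short sketch. The model name, labels, and example comments below are assumptions for illustration only, not the CMU team's actual system.

```python
# Hypothetical sketch: scoring comments as supportive ("help speech") or hostile
# with an off-the-shelf language model. Model choice and labels are illustrative
# assumptions, not the method used in the study.
from transformers import pipeline

# Zero-shot classification assigns text to candidate labels without
# training a task-specific model.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

comments = [
    "These refugees deserve our support and protection.",
    "Send them all back, they don't belong here.",
]

labels = ["supportive of refugees", "hostile toward refugees", "neutral"]

for comment in comments:
    result = classifier(comment, candidate_labels=labels)
    top_label = result["labels"][0]   # highest-scoring category
    top_score = result["scores"][0]   # its confidence score
    print(f"{top_label} ({top_score:.2f}): {comment}")
```

In a moderation workflow along the lines the researchers describe, comments scored as supportive could then be surfaced for human moderators to highlight.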
The researchers presented their findings at the Association for the Advancement of Artificial Intelligence annual conference in New York City, US.