Google taps machine learning to help publishers identify trolls and toxic comments

Wouldn’t it be great if machine learning could be applied to improving comments and other conversations online? Publishers big and small, from The New York Times to the site you’re reading now, spend significant resources to keep trolls from bombarding readers with toxic comments.

A new Google technology based on machine learning aims to automate the process of sorting through millions of comments, helping to identify and flag abusive comments that undermine a civil exchange of ideas.

In partnership with Jigsaw, Google launched Perspective, an early-stage machine learning technology that can help identify toxic comments on the web. The Perspective API lets publishers apply this technology to comments on their own websites.
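As a concrete illustration, here is a minimal sketch of how a publisher might call the API from Python, assuming an API key from Google and the request and response shape documented for the v1alpha1 Comment Analyzer endpoint; the placeholder key and the example comment are assumptions, not code from Google:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; a real key comes from the Google API Console
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return Perspective's toxicity estimate (0.0-1.0) for one comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],                      # the first model is English-only
        "requestedAttributes": {"TOXICITY": {}},  # ask for the toxicity model
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    # summaryScore.value is a probability-like score for the whole comment
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity_score("Thanks, that was a thoughtful point."))  # expect a low score
```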

Google explains how it works:

Perspective reviews comments and scores them based on how similar they are to comments people said were “toxic” or likely to make someone leave a conversation. To learn how to spot potentially toxic language, Perspective examined hundreds of thousands of comments that had been labeled by human reviewers.

Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments.
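The v1alpha1 API also exposes a feedback path (comments:suggestscore) for exactly these corrections. The sketch below is a hedged guess at that call; the payload shape and placeholder key are assumptions based on Google’s published documentation rather than verified code:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder, as above
SUGGEST_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
               f"comments:suggestscore?key={API_KEY}")

def suggest_score(text: str, corrected_toxicity: float) -> None:
    """Send a human moderator's corrected score back to Perspective (assumed shape)."""
    payload = {
        "comment": {"text": text},
        "attributeScores": {
            # the corrected label, mirroring the shape the analyze call returns
            "TOXICITY": {"summaryScore": {"value": corrected_toxicity}}
        },
    }
    requests.post(SUGGEST_URL, json=payload, timeout=10).raise_for_status()

# Example: a moderator marks a flagged comment as a false positive.
suggest_score("That argument doesn't hold water.", 0.0)
```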

Once the system has identified potentially toxic comments, publishers can flag them for their own moderators to review and decide whether to include them in the conversation. Readers could also sort comments by toxicity to surface the conversations that matter. The system could even show commenters the potential toxicity of their comment as they write it, as sketched below.
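Reusing the hypothetical toxicity_score() helper from the first sketch, a publisher-side triage and sort might look like this; the threshold value and function names are illustrative assumptions:

```python
REVIEW_THRESHOLD = 0.8  # assumed cutoff; each publisher would tune this

def triage(comments: list[str]) -> tuple[list[str], list[str]]:
    """Split comments into auto-published and needs-human-review buckets."""
    published, needs_review = [], []
    for text in comments:
        bucket = needs_review if toxicity_score(text) >= REVIEW_THRESHOLD else published
        bucket.append(text)
    return published, needs_review

def sort_by_civility(comments: list[str]) -> list[str]:
    """Order comments least-toxic first, so readers can surface civil threads."""
    return sorted(comments, key=toxicity_score)
```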

Think trolling isn’t such a big problem?

Think again: The New York Times has an entire team charged with reviewing an average of 11,000 comments every day. Because of the sheer manpower required to review them, the paper allows comments on only about ten percent of its articles.

Google and the Times have worked together to train machine learning models so that the moderators can sort through comments more quickly. When Perspective launches publicly and many more publishers embrace it, the system will be exposed to more comments and develop a better understanding of what makes certain comments toxic.

“Our first model is designed to spot toxic language, but over the next year we’re keen to partner and deliver new models that work in languages other than English as well as models that can identify other perspectives, such as when comments are unsubstantial or off-topic,” said Google.

According to Data & Society, 72 percent of American internet users have witnessed harassment online, and nearly half have personally experienced it. Almost a third of respondents said they self-censor what they post online for fear of retribution. Online harassment is estimated to have affected the lives of roughly 140 million people in the U.S., and many more elsewhere.

Source: Google