Credit: AI Trends
By Rob Marvin, Associate Features Editor, PC Magazine
The internet can feel like a toxic place. Trolls descend on comment sections and social media threads to hurl hate speech and harassment, turning potentially enlightening discussions into ad hominem attacks and group pile-ons. Expressing an opinion online often doesn’t seem worth the resulting vitriol.
Massive social platforms—including Facebook, Twitter, and YouTube—admit they can’t adequately police these issues. They’re in an arms race with bots, trolls, and every other undesirable who slips through content filters. Humans are not physically capable of reading every single comment on the web; those who try often regret it.
Tech giants have experimented with various combinations of human moderation, AI algorithms, and filters to wade through the deluge of content flowing through their feeds each day. Jigsaw is trying to find a middle ground. The Alphabet subsidiary and tech incubator, formerly known as Google Ideas, is beginning to prove that machine learning (ML) fashioned into tools for human moderators can change the way we approach the internet’s toxicity problem.
Perspective is an API developed by Jigsaw and Google’s Counter Abuse Technology team. It uses ML to spot abusive and harassing comments online, scoring each one by the perceived impact it might have on a conversation, in a bid to make human moderators’ lives easier.
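In practice, a moderation tool talks to Perspective over HTTPS: a JSON request carries the comment text and the attributes to score (toxicity being the best known), and the response contains a probability-like score between 0 and 1. The sketch below, in Python with only the standard library, shows the general request shape and score extraction; the endpoint and field names follow Perspective’s public documentation, but the sample response is illustrative, and a real call requires an API key from Google.

```python
import json
from urllib import request

ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    "comments:analyze?key={api_key}"
)

def build_request(text):
    """Build the JSON body for a Perspective comments:analyze call."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},  # ask for a toxicity score
    }

def toxicity_score(response):
    """Pull the 0-1 summary score out of an analyze response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def analyze(text, api_key):
    """POST a comment to Perspective and return its toxicity score.

    Requires a real API key, so this function is not exercised here.
    """
    req = request.Request(
        ANALYZE_URL.format(api_key=api_key),
        data=json.dumps(build_request(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return toxicity_score(json.load(resp))

# Illustrative response in the documented shape (the score value is made up).
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}
print(toxicity_score(sample_response))
```

A moderation dashboard built on this would typically flag comments whose score exceeds a chosen threshold (say, 0.8) for human review rather than deleting them outright, which matches Perspective’s stated role as an aid to moderators, not a replacement for them.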
Perspective Amidst the Shouting Matches
The open-source tech was first announced in 2017, though development on it started a few years earlier. Some of the first sites to experiment with Perspective were news publications such as The New York Times, along with Wikipedia. More recently, Perspective has found a home on Reddit and the comment platform Disqus (which is used on PCMag.com).
CJ Adams, product manager for Perspective, said the project set out to examine how people’s voices are silenced online. Jigsaw wanted to explore how targeted abuse or a general atmosphere of harassment can create a chilling effect, discouraging people to the point where they feel it’s not worth the time or energy to add their voice to a discussion. How often have you seen a tweet, post, or comment and chosen not to respond because fighting trolls and getting Mad Online just isn’t worth the aggravation?
“It’s very easy to ruin an online conversation,” said Adams. “It’s easy to jump in, but one person being really mean or toxic could drive other voices out. Maybe 100 people read an article or start a debate, and often you end up with the loudest voices in the room being the only ones left, in an internet that’s optimized for likes and shares. So you kind of silence all these voices. Then what’s defining the debate is just the loudest voice in the room—the shouting match.”