Twitter’s Moderation System Is in Tatters
“Me and other people who have tried to reach out have gotten dead ends,” Benavidez says. “And when we’ve reached out to those who are supposedly still at Twitter, we just don’t get a response.”
Even when researchers can get through to Twitter, responses are slow—sometimes taking more than a day. Jesse Littlewood, vice president of campaigns at the nonprofit Common Cause, says he’s noticed that when his organization reports tweets that clearly violate Twitter’s policies, those posts are now less likely to get taken down.
The volume of content that users and watchdogs may want to report to Twitter is likely to increase. Many of the staff and contractors laid off in recent weeks worked on teams like trust and safety, policy, and civic integrity, all of which worked to keep disinformation and hate speech off the platform.
Melissa Ingle was a senior data scientist on Twitter’s civic integrity team until she was fired along with 4,400 other contractors on November 12. She wrote and monitored algorithms used to detect and remove political misinformation on Twitter, most recently around the elections in the US and Brazil. Of the 30 people on her team, only 10 remain, and many of the human content moderators, who review tweets and flag those that violate Twitter’s policies, have also been laid off. “Machine learning needs constant input, constant care,” she says. “We have to constantly update what we are looking for because political discourse changes all the time.”
Though Ingle’s job did not involve interacting with outside activists or researchers, she says members of Twitter’s policy team did. At times, information from external groups helped inform the terms or content Ingle and her team would train algorithms to identify. She now worries that with so many staffers and contractors laid off, there won’t be enough people to ensure the software remains accurate.
“With the algorithm not being updated anymore and the human moderators gone, there’s just not enough people to manage the ship,” Ingle says. “My concern is that these filters are going to get more and more porous, and more and more things are going to come through as the algorithms get less accurate over time. And there’s no human being to catch things going through the cracks.”
Within a day of Musk taking ownership of Twitter, Ingle says, internal data showed that the number of abusive tweets reported by users increased by 50 percent. That initial spike subsided somewhat, she says, but reports of abusive content remained roughly 40 percent above the pre-takeover volume.
Rebekah Tromble, director of the Institute for Data, Democracy & Politics at George Washington University, also expects to see Twitter’s defenses against banned content wither. “Twitter has always struggled with this, but a number of talented teams had made real progress on these problems in recent months. Those teams have now been wiped out.”
Such concerns are echoed by a former content moderator who worked as a contractor for Twitter until 2020. The contractor, speaking anonymously to avoid repercussions from his current employer, says all the former colleagues he was in touch with who did similar work have been fired. He expects the platform to become a far less pleasant place. “It’ll be horrible,” he says. “I have actively searched the worst parts of Twitter—the most racist, most horrible, most degenerate parts of the platform. That’s what’s going to be amplified.”