The Scramble to Save Twitter’s Research From Elon Musk
Two years ago, Twitter launched what is perhaps the tech industry’s most ambitious attempt at algorithmic transparency. Its researchers wrote papers showing that Twitter’s AI system for cropping images in tweets favored white faces and women, and that posts from the political right in several countries, including the US, UK, and France, received a bigger algorithmic boost than those from the left.
By early October last year, as Elon Musk faced a court deadline to complete his $44 billion acquisition of Twitter, the company’s newest research was almost ready. It showed that a machine-learning program incorrectly demoted some tweets mentioning any of 350 terms related to identity, politics, or sexuality, including “gay,” “Muslim,” and “deaf,” because a system intended to limit views of tweets slurring marginalized groups also impeded posts celebrating those communities. The finding—and a partial fix Twitter developed—could help other social platforms better use AI to police content. But would anyone ever get to read the research?
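The failure mode the researchers describe can be sketched in a few lines of code. The toy example below is purely illustrative, not Twitter’s actual system: the term list and the matching logic are invented, but they show how a rule that demotes any tweet containing a flagged identity term cannot tell abuse apart from celebration.

```python
# Illustrative toy only -- not Twitter's code. A naive term-based
# demotion rule penalizes every tweet that mentions a flagged identity
# term, whether the usage is hateful or celebratory.

# Hypothetical stand-in for the ~350 identity-related terms the paper cites.
IDENTITY_TERMS = {"gay", "muslim", "deaf"}

def naive_demote(tweet: str) -> bool:
    """Demote if the tweet mentions any flagged term, regardless of context."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return bool(words & IDENTITY_TERMS)

examples = [
    "Proud to be deaf and thriving!",       # celebratory
    "Great panel on Muslim tech founders",  # neutral/positive
]
for tweet in examples:
    print(naive_demote(tweet), "->", tweet)
# Both print True: the rule cannot distinguish celebration from abuse,
# so posts celebrating these communities get demoted along with slurs.
```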
Months earlier, Musk had voiced support for algorithmic transparency, saying he wanted to “open-source” Twitter’s content recommendation code. But he had also said he would reinstate popular accounts permanently banned for rule-breaking tweets, mocked some of the same communities Twitter’s researchers were seeking to protect, and complained about an undefined “woke mind virus.” Also disconcerting: Musk’s AI scientists at Tesla have generally not published research.
Twitter’s AI ethics researchers ultimately decided their prospects under Musk were too murky to wait to get their study into an academic journal, or even to finish writing a company blog post. So less than three weeks before Musk finally assumed ownership on October 27, they rushed the moderation-bias study onto the open-access service arXiv, where scholars post research that has not yet been peer reviewed.
“We were rightfully worried about what this leadership change would entail,” says Rumman Chowdhury, who was then engineering director on Twitter’s Machine Learning Ethics, Transparency, and Accountability group, known as META. “There’s a lot of ideology and misunderstanding about the kind of work ethics teams do as being part of some like, woke liberal agenda, versus actually being scientific work.”
Concern about the Musk regime spurred researchers throughout Cortex, Twitter’s machine-learning and research organization, to stealthily publish a flurry of studies much sooner than planned, according to Chowdhury and five other former employees. The results spanned topics including misinformation and recommendation algorithms. The frantic push and the published papers have not been previously reported.
The researchers wanted to preserve the knowledge discovered at Twitter for anyone to use, and to help make other social networks better. “I feel very passionate that companies should talk more openly about the problems that they have and try to lead the charge, and show people that it’s like a thing that is doable,” says Kyra Yee, lead author of the moderation paper.
Twitter and Musk did not respond to a detailed emailed request for comment for this story.
The team on another study worked through the night to make final edits before hitting Publish on arXiv the day Musk took over Twitter, one researcher says, speaking anonymously out of fear of retaliation from Musk. “We knew the runway would shut down when the Elon jumbo jet landed,” the source says. “We knew we needed to do this before the acquisition closed. We can stick a flag in the ground and say it exists.”
The fear was not misplaced. Most of Twitter’s researchers lost their jobs or resigned under Musk. On the META team, Musk laid off all but one person on November 4, and the remaining member, cofounder and research lead Luca Belli, quit later in the month.