Elon Musk’s Twitter Is Making Meta Look Smart
It was the first day of April 2022, and I was sitting in a law firm’s midtown Manhattan conference room at a meeting of Meta’s Oversight Board, the independent body that scrutinizes the company’s content decisions. And for a few minutes, it seemed that despair had set in.
The topic at hand was Meta’s controversial Cross Check program, which gave special treatment to posts from certain powerful users—celebrities, journalists, government officials, and the like. For years this program operated in secret, and Meta even misled the board about its scope. When details of the program were leaked to The Wall Street Journal, it became clear that millions of people received that special treatment, meaning their posts were less likely to be taken down when flagged by algorithms or reported by other users for breaking rules against things like hate speech. The idea was to avoid mistakes in cases where errors would have more impact—or embarrass Meta—because of the prominence of the speaker. Internal documents showed that Meta researchers had qualms about the program’s propriety. Only after that exposure did Meta ask the board to take a look at the program and recommend what the company should do with it.
The meeting I witnessed was part of that reckoning. And the tone of the discussion led me to wonder if the board would suggest that Meta shut down the program altogether, in the name of fairness. “The policies should be for all the people!” one board member cried out.
That didn’t happen. This week the social media world took a pause from lookie-looing the operatic content-moderation train wreck that Elon Musk is conducting at Twitter, as the Oversight Board finally delivered its Cross Check report, delayed because of foot-dragging by Meta in providing information. (Meta never did give the board a list identifying who was granted the special status that let a post stave off a takedown, at least until someone took a closer look.) The conclusions were scathing. Meta claimed that the program’s purpose was to improve the quality of its content decisions, but the board determined that it served more to protect the company’s business interests. Meta never set up processes to monitor the program and assess whether it was fulfilling its mission. The lack of transparency to the outside world was appalling. Finally, all too often Meta failed to deliver the swift, personalized review that was the justification for sparing those posts immediate takedowns. There were simply too many such cases for Meta’s team to handle. Flagged posts frequently remained up for days before receiving that secondary review.
The prime example, featured in the original WSJ report, was a September 2019 post from Brazilian soccer star Neymar containing a sexual image shared without its subject’s consent. Because of the special treatment he got from being in the Cross Check elite, the image—a flagrant policy violation—garnered over 56 million views before it was finally removed. The program meant to reduce the impact of content-moderation mistakes wound up boosting the impact of horrible content.
Yet the board didn’t recommend that Meta shut down Cross Check. Instead, it called for an overhaul. That is in no way an endorsement of the program but an admission of the devilish difficulty of content moderation. The subtext of the Oversight Board’s report was the hopelessness of believing that these decisions could ever be made consistently right. Meta, like other platforms that give users voice, has long emphasized growth before caution, hosting huge volumes of content that would require vast expenditures to police. Meta does spend many millions on moderation—but still makes millions of errors. Seriously cutting down on those mistakes would cost more than the company is willing to spend. The idea of Cross Check is to minimize the error rate on posts from the most important or prominent people. When a celebrity or statesman used its platform to speak to millions, Meta didn’t want to screw up.