
Better Than Nothing: A Look at Content Moderation in 2020

“I don’t think it’s right for a private company to censor politicians or the news in a democracy.”—Mark Zuckerberg, October 17, 2019

“Facebook Removes Trump’s Post About Covid-19, Citing Misinformation Rules”—The Wall Street Journal, October 6, 2020

For more than a decade, the attitude of the biggest social media companies toward policing misinformation on their platforms was best summed up by Mark Zuckerberg’s oft-repeated warning: “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online.” Even after the 2016 election, as Facebook, Twitter, and YouTube faced growing backlash for their role in the dissemination of conspiracy theories and lies, the companies remained reluctant to take action against such content.

Then came 2020.

Under pressure from politicians, activists, and media, Facebook, Twitter, and YouTube all made policy changes and enforcement decisions this year that they had long resisted—from labeling false information from prominent accounts to attempting to thwart viral spread to taking down posts by the president of the United States. It’s hard to say how successful these changes were, or even how to define success. But the fact that they took the steps at all marks a dramatic shift.

“I think we’ll look back on 2020 as the year when they finally accepted that they have some responsibility for the content on their platforms,” said Evelyn Douek, an affiliate at Harvard’s Berkman Klein Center for Internet and Society. “They could have gone farther, there’s a lot more that they could do, but we should celebrate that they’re at least in the ballgame now.”

Social media was never a total free-for-all; platforms have long policed the illegal and obscene. What emerged this year was a new willingness to take action against certain types of content simply because it is false—expanding the categories of prohibited material and more aggressively enforcing the policies already on the books. The proximate cause was the coronavirus pandemic, which layered an information crisis atop a public health emergency. Social media executives quickly perceived their platforms’ potential to be used as vectors of lies about the coronavirus that, if believed, could be deadly. They vowed early on to both try to keep dangerously false claims off their platforms and direct users to accurate information.

One wonders whether these companies foresaw the extent to which the pandemic would become political, and Donald Trump the leading purveyor of dangerous nonsense—forcing a confrontation between the letter of their policies and their reluctance to enforce the rules against powerful public officials. By August, even Facebook would have the temerity to take down a Trump post in which the president suggested that children were “virtually immune” to the coronavirus.

“Taking things down for being false was the line that they previously wouldn’t cross,” said Douek. “Before that, they said, ‘falsity alone is not enough.’ That changed in the pandemic, and we started to see them being more willing to actually take down things, purely because they were false.”

Nowhere did public health and politics interact more combustibly than in the debate over mail-in voting, which arose as a safer alternative to in-person polling places—and was immediately demonized by Trump as a Democratic scheme to steal the election. The platforms, perhaps eager to wash away the bad taste of 2016, tried to get ahead of the vote-by-mail propaganda onslaught. It was mail-in voting that led Twitter to break the seal on applying a fact-checking label to a tweet by Trump, in May, that made false claims about California’s mail-in voting procedure.

This trend reached its apotheosis in the run-up to the November election, as Trump broadcast his intention to challenge the validity of any votes that went against him. In response, Facebook and Twitter announced elaborate plans to counter that push, including adding disclaimers to premature claims of victory and specifying which credible organizations they would rely on to validate the election results. (YouTube, notably, did much less to prepare.) Other moves included restricting political ad-buying on Facebook, increasing the use of human moderation, inserting trustworthy information into users’ feeds, and even manually intervening to block the spread of potentially misleading viral content. As the New York Times writer Kevin Roose observed, these steps “involved slowing down, shutting off or otherwise hampering core parts of their products — in effect, defending democracy by making their apps worse.”
