
On Social Media, American-Style Free Speech Is Dead

For the first time in human history, we can measure a lot of this stuff with an exciting amount of precision. All of this data exists, and companies are constantly evaluating: What are the effects of our rules? Every time they make a rule, they test its enforcement effects and possibilities. The problem is, of course, that it’s all locked up. Nobody has any access to it except for the people in Silicon Valley. So it’s super exciting but also super frustrating.

This ties into maybe the most interesting thing for me in your paper, which is the concept of probabilistic thinking. A lot of coverage and discussion about content moderation focuses on anecdotes, as humans are wont to do. Like, “This piece of content, Facebook said it wasn’t allowed, but it was viewed 20,000 times.” A point that you make in the paper is that perfect content moderation is impossible at scale unless you just ban everything, which nobody wants. You have to accept that there will be an error rate. And every choice is about which direction you want the error rate to go: Do you want more false positives or more false negatives?
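To make that trade-off concrete, here is a minimal sketch in Python with invented numbers, not any platform’s real system: a hypothetical classifier scores each post, and the removal threshold determines which kind of error you get more of.

```python
# A sketch of the false-positive/false-negative trade-off, using invented
# numbers. A hypothetical classifier assigns each post a "badness" score;
# the platform removes everything at or above a chosen threshold.
import random

random.seed(0)

# Benign posts tend to score low and violating posts high, but the two
# distributions overlap -- which is exactly why some error is unavoidable.
benign_scores = [random.gauss(0.3, 0.15) for _ in range(100_000)]
violating_scores = [random.gauss(0.7, 0.15) for _ in range(1_000)]

for threshold in (0.4, 0.5, 0.6):
    false_positives = sum(s >= threshold for s in benign_scores)    # benign, removed
    false_negatives = sum(s < threshold for s in violating_scores)  # violating, kept
    print(f"threshold {threshold:.1f}: "
          f"{false_positives:6,} benign posts removed, "
          f"{false_negatives:4,} violating posts missed")
```

Lowering the threshold in the sketch catches more violating posts at the price of removing more benign ones, and raising it does the reverse; no setting drives both error counts to zero, so the real decision is which error you would rather live with.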

The problem is that if Facebook comes out and says, “Oh, I know that that looks bad, but actually, we got rid of 90 percent of the bad stuff,” that doesn’t really satisfy anyone, and I think one reason is that we are just stuck taking these companies’ word for it.

Totally. We have no idea at all. We’re left at the mercy of that sort of statement in a blog post.

But there’s a grain of truth. Like, Mark Zuckerberg has this line that he’s rolling out all the time now in every congressional testimony and interview. It’s like, the police don’t solve all crime, you can’t have a city with no crime, you can’t expect perfect enforcement. And there is a grain of truth in that. The idea that content moderation will be able to impose order on the entire messiness of human expression is a pipe dream, and there is something quite frustrating, unrealistic, and unproductive about the constant stories that we read in the press about: Here’s an example of one error, or a bucket of errors, of this rule not being perfectly enforced.

Because the only way that we would get perfect enforcement of the rules would be to just ban anything that looks remotely like a violation. And then we would have onions getting taken down because they look like boobs, or whatever it is. Maybe some people aren’t so worried about free speech for onions, but there are worse examples.

No, as someone who watches a lot of cooking videos—

That would be a high cost to pay, right?

I look at far more images of onions than breasts online, so that would really hit me hard.

Yeah, exactly, so the free-speech-for-onions caucus is strong.

I’m in it.

We have to accept errors in one way or the other. So the example that I use in my paper is in the context of the pandemic. I think this is a super useful one, because it makes it really clear. At the start of the pandemic, the platforms had to send their workers home like everyone else, which meant they had to ramp up their reliance on the machines. They didn’t have as many humans doing the checking. And for the first time, they were really candid about the effects of that, which is, “Hey, we’re going to make more mistakes.” Normally, they come out and they say, “Our machines, they’re so great, they’re magical, they’re going to clean all this stuff up.” And then for the first time they were like, “By the way, we’re going to make more mistakes in the context of the pandemic.” But the pandemic made the space for them to say that, because everyone was like, “Fine, make mistakes! We need to get rid of this stuff.” And so they erred on the side of more false positives in taking down misinformation, because the social cost of not using the machines at all was far too high and they couldn’t rely on humans.

In that context, we accepted the error rate. We read stories in the press about how, back when platforms were banning mask ads early in the pandemic, their machines accidentally over-enforced this and also took down a bunch of volunteer mask makers, because the machines were like, “Masks bad; take them down.” And it’s like, OK, it’s not ideal, but at the same time, what choice do you want them to make there? At scale, where there are literally billions of decisions, all the time, there are some costs, and we were freaking out about the mask ads, and so I think that that’s a more reasonable trade-off to make.
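The scale point lends itself to back-of-the-envelope arithmetic. The daily decision volume and error rates in the sketch below are invented for illustration, not reported platform figures, but they show why anecdotes about individual errors are guaranteed to exist no matter how well a system performs.

```python
# Back-of-the-envelope arithmetic for enforcement at scale. The volume
# and error rates below are assumptions chosen purely for illustration.
daily_decisions = 3_000_000_000  # assumed: ~3 billion moderation decisions per day

for error_rate in (0.01, 0.001, 0.0001):
    mistakes = int(daily_decisions * error_rate)
    print(f"at {error_rate:.2%} error: ~{mistakes:,} mistaken decisions per day")
```

Even at 99.99 percent accuracy under these assumed numbers, that is hundreds of thousands of wrong calls every day, so the press will never run out of error anecdotes; the meaningful question is the direction and rate of the errors, not their existence.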

