These Ex-Journalists Are Using AI to Catch Online Defamation
The insight driving CaliberAI is that the universe of potentially defamatory statements is a bounded infinity. While AI moderation is nowhere close to being able to rule decisively on truth and falsity, it should be able to identify the subset of statements that could even potentially be defamatory.
Carl Vogel, a professor of computational linguistics at Trinity College Dublin, has helped CaliberAI build its model. He has a working formula for statements highly likely to be defamatory: They must implicitly or explicitly name an individual or group; present a claim as fact; and use some sort of taboo language or idea—like suggestions of theft, drunkenness, or other kinds of impropriety. If you feed a machine-learning algorithm a large enough sample of text, it will detect patterns and associations among negative words based on the company they keep. That will allow it to make intelligent guesses about which terms, if used about a specific group or person, place a piece of content into the defamation danger zone.
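As a rough illustration of how those three conditions might combine into a single risk score, here is a toy Python sketch. The word lists, weights, and resulting scores are invented for this article; CaliberAI's actual model is machine-learned, and none of this comes from the company.

```python
import re

# Toy illustration only: crude keyword lists standing in for patterns a
# trained model would learn from data. Invented for this article.
HEDGES = ["i believe", "allegedly", "reportedly", "in my opinion"]
TABOO_TERMS = ["liar", "thief", "fraud", "drunk", "corrupt"]

def toy_defamation_score(sentence: str) -> int:
    """Score a sentence from 0 to 100 using Vogel's three conditions."""
    text = sentence.lower()

    # 1. Does the sentence name a specific person or group?
    #    (Crude proxy: a capitalized word anywhere after the first character.)
    names_target = bool(re.search(r"\b[A-Z][a-z]+", sentence[1:]))

    # 2. Is the claim presented as fact rather than hedged as opinion?
    #    (Crude proxy: no hedging phrases present.)
    framed_as_fact = not any(h in text for h in HEDGES)

    # 3. Does it use taboo language, like suggestions of theft or dishonesty?
    uses_taboo = any(t in text for t in TABOO_TERMS)

    # All three conditions together push a sentence into the danger zone.
    return 20 * names_target + 30 * framed_as_fact + 30 * uses_taboo

print(toy_defamation_score("I believe John is a liar"))       # hedged opinion: lower
print(toy_defamation_score("Everyone knows John is a liar"))  # flat assertion: higher
```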
Logically enough, there was no data set of defamatory material sitting out there for CaliberAI to use, because publishers work very hard to avoid putting that stuff into the world. So the company built its own. Conor Brady started by drawing on his long experience in journalism to generate a list of defamatory statements. “We thought about all the nasty things that could be said about any person and we chopped, diced, and mixed them until we’d kind of run the whole gamut of human frailty,” he says. Then a group of annotators, overseen by Alan Reid and Abby Reynolds, a computational linguist and a data linguist on the team, used the original list to build a larger one. The company uses this made-up data set to train the AI to assign probability scores to sentences, from 0 (definitely not defamatory) to 100 (call your lawyer).
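The training step itself is standard supervised text classification. In generic terms, and with an invented toy data set standing in for the company's annotated one, it looks something like the scikit-learn sketch below; CaliberAI has not published its actual pipeline, so treat this purely as a stand-in for the approach.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy examples: 1 = potentially defamatory, 0 = benign.
# The real data set is far larger and hand-annotated by the company's linguists.
sentences = [
    "Everyone knows the mayor is a thief",
    "The director lied to investors about the losses",
    "The weather in Dublin was cold on Tuesday",
    "In my opinion the film was disappointing",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a linear classifier that outputs probabilities.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

def risk_score(sentence: str) -> int:
    """Convert the classifier's probability of the risky class into a 0-100 score."""
    prob = model.predict_proba([sentence])[0][1]  # column 1 is the label-1 class
    return round(prob * 100)

print(risk_score("Everyone knows John is a liar"))
```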
The result, so far, is something like spell-check for defamation. You can play with a demo version on the company’s website, which cautions that “you may notice false positives/negatives as we refine our predictive models.” I typed in “I believe John is a liar,” and the program spit out a probability of 40, below the defamation threshold. Then I tried “Everyone knows John is a liar,” and it returned a probability of 80, flagging “Everyone knows” (statement of fact), “John” (specific person), and “liar” (negative language). Of course, that doesn’t quite settle the matter. In real life, my legal risk would depend on whether I could prove that John really is a liar.
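That flagging behavior suggests the tool returns more than a single number: a score plus the phrases that triggered it, each tagged with a category. One plausible, entirely hypothetical way to represent that kind of output in code (the field names, categories, and threshold are guesses based on the demo, not CaliberAI's published interface):

```python
from dataclasses import dataclass, field

@dataclass
class Flag:
    text: str      # the flagged phrase, e.g. "Everyone knows"
    category: str  # e.g. "statement of fact", "specific person", "negative language"

@dataclass
class Advisory:
    sentence: str
    score: int                   # 0 (definitely not defamatory) to 100 (call your lawyer)
    flags: list = field(default_factory=list)
    threshold: int = 50          # assumed cutoff for surfacing a warning

    def needs_review(self) -> bool:
        return self.score >= self.threshold

advisory = Advisory(
    sentence="Everyone knows John is a liar",
    score=80,
    flags=[
        Flag("Everyone knows", "statement of fact"),
        Flag("John", "specific person"),
        Flag("liar", "negative language"),
    ],
)
print(advisory.needs_review())  # True
```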
“We are classifying on a linguistic level and returning that advisory to our customers,” says Paul Watson, the company’s chief technology officer. “Then our customers have to use their many years of experience to say, ‘Do I agree with this advisory?’ I think that’s a very important fact of what we’re building and trying to do. We’re not trying to build a ground-truth engine for the universe.”
It’s fair to wonder whether professional journalists really need an algorithm to warn that they might be defaming someone. “Any good editor or producer, any experienced journalist, ought to know it when he or she sees it,” says Sam Terilli, a professor at the University of Miami’s School of Communication and the former general counsel of the Miami Herald. “They ought to be able to at least identify those statements or passages that are potentially risky and worthy of a deeper look.”
That ideal might not always be in reach, however, especially during a period of thin budgets and heavy pressure to publish as quickly as possible.
“I think there’s a really interesting use case with news organizations,” says Amy Kristin Sanders, a media lawyer and journalism professor at the University of Texas. She points out the particular risks involved with reporting on breaking news, when a story might not go through a thorough editorial process. “For small- to medium-size newsrooms—who don’t have a general counsel present with them on a daily basis, who may rely on lots of freelancers, and who may be short staffed, so content is getting less of an editorial review than it has in the past—I do think there could be value in these kinds of tools.”