This AI Helps Police Monitor Social Media. Does It Go Too Far?
Since 2016, civil liberties groups have raised alarms about city officials and police departments surveilling social media chatter. Services like Media Sonar, Social Sentinel, and Geofeedia analyze online conversations, clueing police and city leaders in to what hundreds of thousands of users are saying.
Zencity, an Israeli data-analysis firm that serves 200 agencies across the US, markets itself as a less invasive alternative, because it offers only aggregate data and forbids targeted surveillance of protests. Cities like Phoenix, New Orleans, and Pittsburgh say they use the service to combat misinformation and gauge public reaction to topics like social distancing enforcement or traffic laws.
Speaking to WIRED, CEO Eyal Feder-Levy describes the service’s built-in privacy safeguards, like redacting personal information, as a new approach to community engagement. Still, local officials who use Zencity describe a variety of new and potentially alarming uses for the tool, which some cities have adopted without a public approval process, often through free trials.
Brandon Talsma, a county supervisor in Jasper County, Iowa, describes 72 intense hours last September that began with a warning from Zencity. His office had been using the tool for only a few months when Zencity’s analysts noticed a sudden increase in social media chatter about Jasper County following news reports of a gruesome killing.
A 44-year-old Black man living in the city of Grinnell, which is 92 percent white, had been found dead in a ditch, his body wrapped in blankets and set alight. Early news reports fixated on the grim details, and rumors spread that the man had been lynched by Grinnell residents.
“We’re a small county; we’ve got very limited assets and resources,” Talsma said. “It had the recipe to turn very ugly.”
Zencity noted that almost none of the online chatter originated in Iowa. Talsma’s team was afraid the rumors could snowball into the type of misinformation that causes violence. Talsma said the team hadn’t considered the racial optics until Zencity alerted them to the discussion online.
Police say the killing wasn’t racially motivated, and they called a press conference at which Iowa-Nebraska NAACP president Betty Andrews supported that finding. Police have since identified and charged four suspects, three white men and one white woman, in connection with the case.
Zencity creates custom reports for city officials and law enforcement, using machine learning to scan public conversations from social media, message boards, local news reports, and 311 calls, promising insights on how residents are responding to a particular topic. Firms like Meltwater and Brandwatch similarly track keyword phrases for corporate clients, but don’t bar users from seeing individual profiles.
The service has become a powerful tool for local law enforcement agencies across the country, which are still responding to the nationwide debate over police reform as well as a recent spike in crime in major cities.
As long as critics are having these discussions on a public channel, Zencity can pick up what they’re saying and produce reports on it. The company does not have full access to the “fire hose” of everything discussed on Facebook and Twitter, but it continuously runs customized searches of those platforms to examine and weigh sentiment.
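Zencity has not published its methodology, but the keyword-driven, aggregate-only approach described above can be sketched in a few lines of code. The toy Python example below is purely illustrative: the keywords, the tiny sentiment lexicon, the redaction patterns, and every function and field name are hypothetical stand-ins, not the company’s actual system.

```python
# Illustrative sketch of keyword-filtered, aggregate-only sentiment monitoring.
# Nothing here reflects Zencity's real pipeline; all names and data are invented.
import re
from dataclasses import dataclass
from statistics import mean

# Tiny hand-rolled sentiment lexicon; a production system would use a trained model.
LEXICON = {"angry": -1.0, "outraged": -1.0, "worried": -0.5,
           "supportive": 0.5, "grateful": 1.0, "safe": 0.5}

# Crude patterns for redacting personal identifiers before any analysis.
REDACTIONS = [
    (re.compile(r"@\w+"), "[handle]"),                         # social media handles
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[phone]"),   # phone numbers
]

@dataclass
class Post:
    text: str
    region: str  # coarse geography only; no user identity is retained

def redact(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

def score(text: str) -> float:
    """Average lexicon score of the words in a post (0.0 if none match)."""
    hits = [LEXICON[w] for w in re.findall(r"[a-z']+", text.lower()) if w in LEXICON]
    return mean(hits) if hits else 0.0

def aggregate_report(posts: list[Post], keywords: list[str]) -> dict:
    """Return only counts and averages for posts matching the search terms."""
    matched = [p for p in posts
               if any(k.lower() in p.text.lower() for k in keywords)]
    redacted = [redact(p.text) for p in matched]
    out_of_area = sum(p.region != "Jasper County" for p in matched)
    return {
        "matching_posts": len(matched),
        "avg_sentiment": round(mean(score(t) for t in redacted), 2) if redacted else 0.0,
        "share_from_outside_area": round(out_of_area / len(matched), 2) if matched else 0.0,
    }

if __name__ == "__main__":
    sample = [
        Post("Outraged by what happened in Grinnell, call 515-555-0100", "Out of state"),
        Post("@neighbor grateful Jasper County shared the facts quickly", "Jasper County"),
    ]
    print(aggregate_report(sample, keywords=["Grinnell", "Jasper County"]))
```

The point of the sketch is the shape of the output: matched posts are redacted and reduced to counts and averages, so the report can flag, say, that most chatter about a keyword originates outside the county without ever surfacing an individual account.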
“If they’re going to meet at this location or that location, that’s all publicly available information, and it’s free for anyone to review,” explains Sheriff Tony Spurlock in Douglas County, Colorado, south of Denver. He says the sheriff’s office has used the tool for roughly a year, signing a $72,000 contract in early 2021. The tool delivers aggregate information and doesn’t identify individual users.
Feder-Levy says agencies are warned about prohibited uses, and that the software alerts the company if clients use the service to target individuals or groups, as has happened with other tools. In 2016, for example, Baltimore police tracked phrases like #MuslimLivesMatter, #DontShoot, and #PoliceBrutality.
Spurlock says the software proved useful after prosecutors in April concluded two officers were justified in shooting a man last December. Details of the shooting are complex: The man was armed with a knife, but he had struggled for years with bipolar depression and called 911 himself. Dispatch told the officers they were responding to an urgent domestic violence call, but the man’s wife describes the call as a wellness check and claims police fired almost immediately after arriving.