
A Battlefield AI Company Says It’s One of the Good Guys

Instead, that slogan says less about what the company does and more about why it’s doing it. Helsing’s job adverts brim with idealism, calling for people with a conviction that “democratic values are worth protecting.”

Helsing’s three founders speak about Russia’s invasion of Crimea in 2014 as a wake-up call that the whole of Europe needed to be ready to respond to Russian aggression. “I became increasingly concerned that we are falling behind the key technologies in our open societies,” Reil says. That feeling grew as he watched, in 2018, Google employees protest against a deal with the Pentagon, in which Google would have helped the military use AI to analyze drone footage. More than 4,000 staff signed a letter arguing that it was morally and ethically irresponsible for Google to aid military surveillance, with its potentially lethal outcomes. In response, Google said it wouldn’t renew the contract.

“I just didn’t understand the logic of it,” Reil says. “If we want to live in open and free societies, be who we want to be and say what we want to say, we need to be able to protect them. We can’t take them for granted.” He worried that if Big Tech, with all its resources, were dissuaded from working with the defense industry, then the West would inevitably fall behind. “I felt like if they’re not doing it, if the best Google engineers are not prepared to work on this, who is?”

It’s usually hard to tell if defense products work the way their creators say they do. Companies selling them, Helsing included, claim it would compromise their tools’ effectiveness to be transparent about the details. But as we talk, the founders try to project an image of a company whose AI is compatible with the democratic regimes it wants to sell to. “We really, really value privacy and freedom a lot, and we would never do things like face recognition,” says Scherf, claiming that the company wants to help militaries recognize objects, not people. “There’s certain things that are not necessary for the defense mission.”

But creeping automation in a deadly industry like defense still raises thorny issues. If all Helsing’s systems offer is increased battlefield awareness that helps militaries understand where targets are, that doesn’t pose any problems, says Herbert Lin, a senior research scholar at Stanford University’s Center for International Security and Cooperation. But once this system is in place, he believes, decisionmakers will come under pressure to connect it with autonomous weapons. “Policymakers have to resist the idea of doing that,” Lin says, adding that humans, not machines, need to be accountable when mistakes happen. If AI “kills a tractor rather than a truck or a tank, that’s bad. Who’s going to be held responsible for that?”

Reil insists that Helsing does not make autonomous weapons. “We make the opposite,” he says. “We make AI systems that help humans better understand the situation.”

Although operators can use Helsing’s platform to take down a drone, right now it is a human that makes that decision, not the AI. But there are questions about how much autonomy humans really have when they work closely with machines. “The less you make users understand the tools they’re working with, the more they treat them like magic,” says Jensen of the Center for Strategic and International Studies, who says this means military users can end up trusting AI either too much or too little.

