
Everyone Wants to Regulate AI. No One Can Agree How

I agree with every single one of those points, which can potentially guide us on the actual boundaries we might consider to mitigate the dark side of AI. Things like sharing what goes into training large language models like those behind ChatGPT, and allowing opt-outs for those who don’t want their content to be part of what LLMs present to users. Rules against built-in bias. Antitrust laws that prevent a few giant companies from creating an artificial intelligence cabal that homogenizes (and monetizes) pretty much all the information we receive. And protection of your personal information as used by those know-it-all AI products.

But reading that list also highlights the difficulty of turning uplifting suggestions into actual binding law. When you look closely at the points from the White House blueprint, it’s clear that they don’t just apply to AI, but pretty much everything in tech. Each one seems to embody a user right that has been violated since forever. Big tech wasn’t waiting around for generative AI to develop inequitable algorithms, opaque systems, abusive data practices, and a lack of opt-outs. That’s table stakes, buddy, and the fact that these problems are being brought up in a discussion of a new technology only highlights the failure to protect citizens against the ill effects of our current technology.

During that Senate hearing where Altman spoke, senator after senator sang the same refrain: We blew it when it came to regulating social media, so let’s not mess up with AI. But there’s no statute of limitations on making laws to curb previous abuses. The last time I looked, billions of people, including just about everyone in the US who has the wherewithal to poke a smartphone display, are still on social media, still being bullied, having their privacy compromised, and being exposed to horrors. Nothing prevents Congress from getting tougher on those companies and, above all, passing privacy legislation.

The fact that Congress hasn’t done this casts severe doubt on the prospects for an AI bill. No wonder certain regulators, notably FTC chair Lina Khan, aren’t waiting around for new laws. Khan claims that current law already provides her agency plenty of jurisdiction to take on the issues of bias, anticompetitive behavior, and invasion of privacy that new AI products present.

Meanwhile, the difficulty of actually coming up with new laws—and the enormity of the work that remains to be done—was highlighted this week when the White House issued an update on that AI Bill of Rights. It explained that the Biden administration is breaking a big-time sweat on coming up with a national AI strategy. But apparently the “national priorities” in that strategy are still not nailed down.

Now the White House wants tech companies and other AI stakeholders—along with the general public—to submit answers to 29 questions about the benefits and risks of AI. Just as the Senate subcommittee asked Altman and his fellow panelists to suggest a path forward, the administration is asking corporations and the public for ideas. In its request for information, the White House promises to “consider each comment, whether it contains a personal narrative, experiences with AI systems, or technical, legal, research, policy, or scientific materials, or other content.” (I breathed a sigh of relief to see that comments from large language models are not being solicited, though I’m willing to bet that GPT-4 will be a big contributor despite this omission.)
