
What Big Tech Said at the Senate AI Summit

The U.S. federal government is still swimming in circles trying to form some sort of plan to regulate the exploding AI industry. So when the usual suspects of big tech again returned to Capitol Hill on Wednesday for a closed-door meeting on potential AI regulation, they came prepared with the same talking points they’ve been presenting for the last several years, though with an added air of haste to the proceedings.

At the artificial intelligence forum hosted by Senate Majority Leader Chuck Schumer, the big boys all laid their cards on the table, hoping to get the kind of AI regulations they want. Elon Musk, who recently established the late-to-the-party company xAI, reiterated his stance that AI threatens humanity in a conversation with Schumer after the fact, according to the Wall Street Journal. It’s the same position he’s held for years, though it won’t stop the multi-billionaire from using data harvested from Twitter and Tesla to train his upcoming AI models.

According to CBS News, Musk told reporters that AI companies need a “referee,” meaning big government would act as the middle manager for big tech’s latest foray into transformative technology. Of course, there’s a wide variety of opinions there. Good old Bill Gates, the original co-founder of Microsoft, went full tech evangelist, reportedly saying that generative AI systems will, somehow, end world hunger.

The summit was headlined by the big tech execs of today and yesteryear, including the likes of Nvidia co-founder Jensen Huang and former Google CEO Eric Schmidt. There were some tech critics in attendance as well as union leaders, such as Writers Guild president Meredith Stiehm. The guild is currently on strike partially due to film studios’ desire to use AI to underpay writers. Stiehm was seated across the table from Charles Rivkin, the CEO of the Motion Picture Association. Rivkin and his group aren’t directly involved in the strike negotiations, but the pairing paints a picture of just how widespread concerns over AI have become.

The few outside tech researchers had the task of bringing many of Musk’s and Gates’s comments back down to earth. Mozilla Foundation fellow Deb Raji tweeted that they spent most of their time at the meeting fact-checking claims about what AI can actually do.


At the summit, Meta CEO Mark Zuckerberg got into it with Tristan Harris, who leads the nonprofit Center for Humane Technology, over the company’s use of supposedly open-source AI. Harris reportedly claimed his center was able to manipulate Meta’s Llama 2 AI language model into giving instructions for creating dangerous biological compounds. Zuckerberg reportedly tried to handwave the critique, saying that the information is already available on the internet.

According to a transcript of Zuckerberg’s comments released by Meta, Zuck also tried touting his company’s push for open source, as it “democratizes” these AI tools. He said the two big issues at hand are “safety” and responsible use of AI and “access” to AI to create “opportunity in the future.” And despite Zuckerberg continually touting the open nature of his AI models, they really aren’t all that open. The nonprofit advocacy group Open Source Initiative has drilled down on Meta’s actual licenses, noting they only authorize “some commercial uses.”

Schumer also said that everybody involved in the summit, whether tech moguls or advocacy groups, agreed the government needs some sort of role in regulating the advent of AI. The Senate majority leader claimed the tech leaders understood that, even if they install guardrails on their AI models, “they’ll have competitors who won’t.”

What’s already clear is that tech companies want AI regulation. Regulation gives them clear instructions for how to proceed, but it also gives them walls to hide behind when something inevitably goes wrong. It may also make it that much harder for new startups to compete against the tech giants. Microsoft President Brad Smith endorsed federal licensing and a new agency for policing AI platforms. According to Politico, Smith said a licensing regime would ensure “a certain baseline of safety, of capability,” and companies would essentially need to “prove” they’re able to operate their AI under the law.

While Microsoft was trying to pull up the ladder before other companies could reach its current heights, Google and other tech giants would prefer a softer touch. It’s what they’re currently getting under the auspices of the White House and its voluntary commitments for developing ethical AI.

