
Sam Altman’s ‘Inconsistent Candor’ Is Showing

When OpenAI’s board fired Sam Altman in late 2023, the board members said he “was not consistently candid in his communications.” The statement raised more questions than it answered, indirectly calling Altman a liar without saying what, exactly, he had lied about. Six months later, creatives and former employees are once again asking the public to question OpenAI’s trustworthiness.

This month, OpenAI claimed that Sky, one of ChatGPT’s voices, was never intended to resemble Scarlett Johansson, who voiced the AI assistant in Her. The award-winning actress responded with a damning public statement and a threat of legal action, and the voice has since been taken down. Also this month, two prominent figures in the AI community who led a safety team within OpenAI quit. One of them, Jan Leike, said on his way out that OpenAI’s “safety culture and processes have taken a backseat to shiny products.” As Ed Zitron writes, it’s becoming harder and harder to take OpenAI at face value.

First, the claim that Sky doesn’t sound like Johansson strains belief. Right after the launch of GPT-4 Omni, Gizmodo published an article noting the voice’s resemblance to the one in Her, as did many other publications. OpenAI executives seemed to jokingly hint at the likeness around the launch: Altman tweeted the word “her” that day, and OpenAI’s Audio AGI Research Lead uses a screenshot from the film as his background on X. We could all see what OpenAI was going for. Second, Johansson says Altman approached her twice about voicing ChatGPT’s audio assistant. OpenAI says Sky was voiced by a different actor altogether, but the claim strikes many as disingenuous.

Last week, Altman said he was “embarrassed” not to have known that his company forced departing employees to choose between staying quiet about any bad experiences at OpenAI for life or giving up their equity. The lifelong non-disparagement agreement was revealed in a Vox report, which spoke with a former OpenAI employee who refused to sign it. While many companies use non-disclosure agreements, it’s not every day you see one this extreme.

In an interview back in January, Altman said he didn’t know whether OpenAI’s Chief Scientist Ilya Sutskever was still working at the company. Just last week, Sutskever and Leike, his co-lead on the Superalignment team, quit OpenAI. Leike said Superalignment’s resources had been siphoned off to other parts of the company for months.

In March, Chief Technology Officer Mira Murati said she wasn’t sure whether Sora was trained on YouTube videos. Chief Operating Officer Brad Lightcap compounded the uncertainty by dodging a question about it at Bloomberg’s Tech Summit in May. Nonetheless, The New York Times reports that senior members of OpenAI were involved in transcribing YouTube videos to train AI models. On Monday, Google CEO Sundar Pichai told The Verge that if OpenAI did train on YouTube videos, that would not be appropriate.

Ultimately, OpenAI is shrouded in mystery, and the question of Sam Altman’s “inconsistent candor” just won’t go away. The opacity may be damaging OpenAI’s reputation, but it also works in the company’s favor: OpenAI has painted itself as a secretive startup holding the key to our futuristic world, and in doing so, it has successfully captured our collective attention. All the while, the company has continued to ship cutting-edge AI products. Still, it’s hard not to be skeptical of communications from OpenAI, a company built on the premise of being “open.”

