Google DeepMind CEO Demis Hassabis Says Its Next Algorithm Will Eclipse ChatGPT
In 2014, DeepMind was acquired by Google after demonstrating striking results from software that used reinforcement learning to master simple video games. Over the next several years, DeepMind showed how the technique could do things that once seemed uniquely human—often with superhuman skill. When AlphaGo beat Go champion Lee Sedol in 2016, many AI experts were stunned, because they had believed it would be decades before machines would become proficient at a game of such complexity.
New Thinking
Training a large language model like OpenAI’s GPT-4 involves feeding vast amounts of curated text from books, webpages, and other sources into machine learning software known as a transformer. It uses the patterns in that training data to become proficient at predicting the letters and words that should follow a piece of text, a simple mechanism that proves strikingly powerful at answering questions and generating text or code.
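That next-word objective can be illustrated with a toy model. The sketch below is a deliberately simplified, character-level frequency counter—not a transformer—but it shows the same core idea: learn from text which symbol tends to follow which, then use those statistics to predict what comes next.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, which characters follow it in the text."""
    counts = defaultdict(Counter)
    for current, following in zip(text, text[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts, ch):
    """Return the character most often seen after `ch` during training."""
    if ch not in counts:
        return None
    return counts[ch].most_common(1)[0][0]

# A tiny stand-in for the "vast amounts of curated text" a real model sees.
corpus = "the cat sat on the mat. the cat ran."
model = train_bigram(corpus)
```

In this toy corpus every "h" is followed by "e", so `predict_next(model, "h")` returns `"e"`. A transformer does something far richer—it conditions on long stretches of context rather than a single character—but the training signal is the same: predict what follows.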
An important additional step in making ChatGPT and similarly capable language models is using reinforcement learning based on feedback from humans on an AI model’s answers to finesse its performance. DeepMind’s deep experience with reinforcement learning could allow its researchers to give Gemini novel capabilities.
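The reinforcement-learning step can be sketched in miniature. The example below is a hypothetical toy, not DeepMind's or OpenAI's actual method: it treats a "policy" as a score per candidate answer and applies a REINFORCE-style update that shifts probability toward answers a human rated positively.

```python
import math

def softmax(scores):
    """Turn raw scores into a probability distribution over answers."""
    exps = {k: math.exp(v) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

def reinforce(policy, chosen, reward, lr=1.0):
    """Nudge scores so the human-rewarded answer becomes more likely."""
    probs = softmax(policy)
    for answer in policy:
        # Gradient of log-probability of `chosen` under a softmax policy.
        grad = (1.0 if answer == chosen else 0.0) - probs[answer]
        policy[answer] += lr * reward * grad

# Hypothetical candidate answers; both start equally likely.
policy = {"helpful answer": 0.0, "unhelpful answer": 0.0}

# Simulated human feedback: the helpful answer gets positive reward.
for _ in range(5):
    reinforce(policy, "helpful answer", reward=1.0)
```

After a few rounds of feedback the policy assigns most of its probability to the preferred answer. Real RLHF pipelines train a separate reward model from human comparisons and optimize a full language model against it, but the feedback loop is the same shape.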
Hassabis and his team might also try to enhance large language model technology with ideas from other areas of AI. DeepMind researchers work in areas ranging from robotics to neuroscience, and earlier this week the company demonstrated an algorithm capable of learning to perform manipulation tasks with a wide range of different robot arms.
Learning from physical experience of the world, as humans and animals do, is widely expected to be important to making AI more capable. The fact that language models learn about the world indirectly, through text, is seen by some AI experts as a major limitation.
Murky Future
Hassabis is tasked with accelerating Google’s AI efforts while also managing unknown and potentially grave risks. The recent, rapid advancements in language models have made many AI experts—including some building the algorithms—worried about whether the technology will be put to malevolent uses or become difficult to control. Some tech insiders have even called for a pause on the development of more powerful algorithms to avoid creating something dangerous.
Hassabis says the extraordinary potential benefits of AI—such as for scientific discovery in areas like health or climate—make it imperative that humanity does not stop developing the technology. He also believes that mandating a pause is impractical, as it would be near impossible to enforce. “If done correctly, it will be the most beneficial technology for humanity ever,” he says of AI. “We’ve got to boldly and bravely go after those things.”
That doesn’t mean Hassabis advocates that AI development proceed in a headlong rush. DeepMind has been exploring the potential risks of AI since before ChatGPT appeared, and Shane Legg, one of the company’s cofounders, has led an “AI safety” group within the company for years. Hassabis joined other high-profile AI figures last month in signing a statement warning that AI might someday pose a risk comparable to nuclear war or a pandemic.
One of the biggest challenges right now, Hassabis says, is to determine what the risks of more capable AI are likely to be. “I think more research by the field needs to be done—very urgently—on things like evaluation tests,” he says, to determine how capable and controllable new AI models are. To that end, he says, DeepMind may make its systems more accessible to outside scientists. “I would love to see academia have early access to these frontier models,” he says—a sentiment that, if followed through on, could help address concerns that experts outside big companies are being shut out of the newest AI research.
How worried should you be? Hassabis says that no one really knows for sure that AI will become a major danger. But he is certain that if progress continues at its current pace, there isn’t much time to develop safeguards. “I can see the kinds of things we’re building into the Gemini series right now, and we have no reason to believe that they won’t work,” he says.