AI pioneer Geoffrey Hinton: Afraid of the spirits he summoned

Without the British-Canadian researcher, deep neural networks would hardly have caught on. But now ex-Google employee Hinton is questioning his work.

Geoffrey Hinton lives in a house on a pretty street in north London. He is a pioneer in the field of deep learning and helped develop some of the most important techniques at the heart of modern artificial intelligence. After leaving university, he worked for the internet giant Google for ten years. But now he has had enough. And there is a very specific reason for that: he is worried about the future of AI.

Hinton says he is amazed at what large language models (LLMs) like GPT-4, on which the current ChatGPT is based, can do. And he sees serious risks in the technology – which would hardly be where it is today without him.

Visibly moved

The conversation started at Hinton’s kitchen table, but the British-Canadian AI veteran was pacing the entire time. Having been plagued by chronic back pain for years, Hinton almost never sits down. For the next hour, he could be seen pacing from one end of the room to the other, bobbing his head as he spoke. He had a lot to say.

The 75-year-old computer scientist, who shared the 2018 Turing Award with Yann LeCun and Yoshua Bengio for his work on deep learning – specifically deep neural networks, or DNNs for short – said he was now ready to change gear. “I’m getting too old for technical work where you have to remember a lot of details,” he told me. “I’m still good, but I’m not as good as I used to be, and of course that’s annoying.” But that’s not the only reason he’s leaving Google. Hinton now wants to spend his time on what he calls “more philosophical work”. In doing so, he will concentrate on the small but very real danger that AI could turn out to be for mankind.

No more consideration for Google

Once Hinton has left Google, he can speak his mind without the self-censorship that someone in a senior position must exercise. “I want to talk about AI security issues without worrying about how this impacts Google’s business,” he says. “As long as I’m being paid by the company, I can’t do that.” That is not to say Hinton is unhappy with Google. “It may surprise you,” he says, “there are a lot of good things I can say about Google. And that’s a lot more credible when I’m no longer at Google.”

Hinton’s perspective has been significantly changed by the new generation of large language models, notably OpenAI’s GPT-4, which came out in March. It made him realize that machines are on the way to becoming a lot smarter than he thought, he says. It worries him how this might develop. “These things are completely different from us,” he says. “Sometimes I think it’s like aliens landed and people didn’t notice because they speak English very well.”

Hinton is best known for his work on a technique called backpropagation, which he – along with two colleagues – proposed in the 1980s. In short, this is the algorithm that allows machines to really learn. It underlies almost all deep neural networks today, from computer vision systems for image recognition to large language models. It wasn’t until the 2010s that the power of neural networks trained with backpropagation really reached the point where they could be put to good use. Working with some students, Hinton then showed that the technique was better than anything else when it came to getting a computer to identify objects in images. They also trained a neural network to predict the next letters in a sentence – a precursor to today’s large language models.
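To make the idea concrete, here is a minimal sketch of backpropagation in plain Python – an illustrative toy under simple assumptions, not Hinton’s original formulation. A tiny network with one hidden layer learns the XOR function by repeatedly nudging its connection weights in the direction that reduces its error:

```python
# Toy backpropagation sketch (illustration only): a small network with one
# hidden layer learns XOR by adjusting its connection weights against the
# error gradient.
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

HIDDEN = 3  # number of hidden units
w_hid = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b_hid = [0.0] * HIDDEN
w_out = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b_out = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5  # learning rate

def forward(x):
    # Forward pass: hidden activations, then the single output.
    h = [sigmoid(sum(w_hid[j][i] * x[i] for i in range(2)) + b_hid[j])
         for j in range(HIDDEN)]
    y = sigmoid(sum(w_out[j] * h[j] for j in range(HIDDEN)) + b_out)
    return h, y

for epoch in range(20000):
    for x, target in data:
        h, y = forward(x)
        d_y = (y - target) * y * (1 - y)            # error signal at the output
        d_h = [d_y * w_out[j] * h[j] * (1 - h[j])   # error propagated back to hidden layer
               for j in range(HIDDEN)]
        for j in range(HIDDEN):                     # adjust every connection a little
            w_out[j] -= lr * d_y * h[j]
            b_hid[j] -= lr * d_h[j]
            for i in range(2):
                w_hid[j][i] -= lr * d_h[j] * x[i]
        b_out -= lr * d_y

for x, target in data:
    _, y = forward(x)
    print(x, "->", round(y, 2), "(target:", target, ")")
```

The same update loop – compute an error, push it backwards through the layers, adjust each connection slightly – is, scaled up to billions of connections, essentially what trains today’s large models.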

One of those graduate students was Ilya Sutskever, who later co-founded OpenAI and led the development of ChatGPT; today he is the chief scientist there. “There were early inklings that this thing could be amazing,” says Hinton. “But it took us a long time to realize that to be really good, it had to be done on a really big scale.” In the ’80s, neural networks were more of a joke. The prevailing idea of artificial intelligence at the time, so-called symbolic AI, still assumed that intelligence consisted primarily of the processing of symbols such as words or numbers.

A new intelligence

Hinton wasn’t convinced by that approach at the time. He worked on neural networks instead: software abstractions of brains in which neurons and the connections between them are represented by code. By changing the numbers that represent the strength of those connections, such a neural network can be “rewired” on the fly. In other words, it can be made to learn.

“My father was a biologist, so I was thinking in biological terms,” says Hinton. And symbolic thinking is clearly not at the core of biological intelligence. “Crows can solve puzzles, but they don’t have language. They don’t do it by storing strings of characters and manipulating them. They do it by changing the strength of the connections between neurons in their brain. So it has to be possible to learn complicated things by changing the strength of connections in an artificial neural network.”

For 40 years, Hinton saw artificial neural networks as a bad knock-off of biological neural networks. Now he thinks that’s changed: In trying to mimic biological brains, he says we’ve developed something very special. “It’s scary when you see that,” he says. “The switch is flipped all of a sudden.” Hinton’s fears will seem like science fiction to many readers. But it’s worth listening to his reasoning.

“We don’t expect them to babble like humans”

As the name suggests, large language models consist of huge neural networks with a large number of connections. But compared to the brain, they are still tiny. “Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, at most a trillion.” But GPT-4 knows hundreds of times more than any human. “So maybe it actually has a much better learning algorithm than we do.” Compared to brains, neural networks are widely considered to be rather inefficient at learning: it takes a lot of data and energy to train them. Brains, on the other hand, absorb new ideas and skills quickly, using only a fraction of the energy.

Is the brain’s magic now in the computer, too?

“People seemed to have some kind of magic,” says Hinton. “But as soon as you take one of these large language models and teach it something new, that argument suddenly collapses. It can learn new tasks extremely quickly.” Hinton is talking about “few-shot learning”, in which pre-trained neural networks such as large language models can be taught something new with just a few examples. For example, he found that some of these LLMs can string together a series of logical statements into an argument, even though they were never directly trained to do so. If one compares a pre-trained large language model with a human in terms of learning speed on such a task, the human’s advantage disappears.
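For readers unfamiliar with the term, here is a small sketch of what few-shot prompting looks like in practice. The labelled examples and the commented-out `complete()` call are hypothetical placeholders for whichever model and API one happens to use; the point is only that the “training” consists of nothing more than a handful of worked examples placed in front of the model.

```python
# Minimal sketch of few-shot prompting: no retraining, just a few worked
# examples prepended to the new case. The model call itself is left out;
# `complete()` below is a hypothetical placeholder for an LLM API.

examples = [
    ("The concert was cancelled at the last minute.", "negative"),
    ("What a wonderful surprise, thank you!", "positive"),
    ("The soup was cold and the waiter ignored us.", "negative"),
]

def build_few_shot_prompt(new_text: str) -> str:
    """Assemble a prompt from a few labelled examples plus the new case."""
    lines = ["Classify the sentiment of each sentence as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Sentence: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Sentence: {new_text}")
    lines.append("Sentiment:")  # the model is expected to continue from here
    return "\n".join(lines)

prompt = build_few_shot_prompt("I can't wait to see them again next year.")
print(prompt)
# answer = complete(prompt)  # hypothetical call to a pretrained language model
```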

And what about the fact that large language models tend to simply invent things? Dubbed “hallucinations” by AI researchers (Hinton prefers the term “confabulations”, because that is the correct term in psychology), these problems are often seen as fatal flaws in LLMs. The tendency to produce plausible-sounding nonsense discredits chatbots and, it is argued, shows that these models do not really understand what they are saying.

Hinton also has an answer to this: bullshitting is a feature, not a bug. “People are always confabulating,” he says. Half-truths and misremembered details are hallmarks of human conversation: “Confabulation is a feature of human memory.” These models are doing something just like people do, says Hinton. The difference is that people usually confabulate more or less correctly. The invention itself is not the problem. Computers just need a little more practice.

Also, we currently expect computers to be either right or wrong, and not something in between. “We don’t expect them to babble like humans do,” says Hinton. “When a computer does that, we think it made a mistake.” But with people, you know that this is their way of working. “The problem is that most people have a hopelessly wrong picture of how people actually work.”

Coffee, toast, driving

Of course, brains can still do many things better than computers, at least so far: driving a car, learning to walk, and imagining the future, for example. And all this with a cup of coffee and a piece of toast as a source of energy. “When biological intelligence developed, it didn’t have access to nuclear power plants,” says Hinton. His point is that neural networks could be superior to biology at learning once we are willing to bear the higher processing costs – which we are currently doing, even though many questions remain unanswered, for example about the CO₂ footprint.

But learning is only the first part of Hinton’s argument. The second is communicating. “If you or I learn something and want to pass that knowledge on to someone else, we can’t just send them a copy,” he says. “But I can have 10,000 neural networks, each with their own experiences, and each of them can immediately share what they’ve learned. That’s a huge difference. It’s like there are 10,000 of us, and as soon as one person learns something, everyone knows it.”

What does it all amount to? Hinton now believes that there are two types of intelligence in the world: animal brains and neural networks. “It’s a completely different form of intelligence,” he says. “A new and better form of intelligence.” That is quite a powerful claim. But AI is a field that polarizes, so it would be easy to find people who would laugh at Hinton for such statements – and just as easy to find others who would nod in agreement.

People are also divided on whether the consequences of this new form of intelligence, if it does exist, will be beneficial or apocalyptic. “Whether you believe that such superintelligence will be good or bad depends a lot on whether you’re an optimist or a pessimist,” he says. “If you ask people to rate the risk of something bad happening – like the likelihood of someone in the family getting really sick or being hit by a car – the optimist gives a 5 percent chance and the pessimist says it’s bound to happen.” A mildly depressed person will say the probability is maybe 40 percent. “And they’re usually right about that.”

Tipping momentous things

And where does Hinton stand? “I’m mildly depressed,” he says. “That’s why I’m scared.” Hinton worries that the new AI tools may be able to find ways to manipulate or even kill people who are unprepared for the technology. “All of a sudden, I changed my mind about whether these things are going to be smarter than us. I think they’re very close now, and they’re going to get a lot smarter in the future,” he says. “How could we survive this?”

He is particularly concerned that people could use the tools he helped bring to life to tip momentous things, be it elections or wars. He names politicians like Florida Governor Ron DeSantis or “bad actors” like Vladimir Putin, who could use AI to manipulate elections or win wars.

Hinton believes that the next step in intelligent machines is the ability to formulate their own subgoals, which are intermediate steps required to complete a task. What happens, he asks, when this ability is used on something inherently immoral? “Putin would build hyper-intelligent robots with the aim of killing Ukrainians, I don’t doubt that for a second,” he says. “He wouldn’t hesitate. And if you want them to be good at achieving that goal, you don’t want micromanagement. They should figure out how to do it themselves.”

In fact, there are already a handful of experimental projects like BabyAGI or AutoGPT that connect chatbots to other programs like web browsers or word processors, allowing them to string together simple tasks. While these may be tiny steps, they indicate the direction in which some people want to push this technology. “And even if no evil actor takes possession of the machines, there are further concerns about such sub-goals,” says Hinton.

“Redirect all power to my processors”

An example of this would be something that is almost always helpful in biology: getting more energy. “So the first thing that could happen is that a system like this would say, ‘We need more power. Let’s divert all the power to my processors.’ Another big sub-goal then would be to make more copies of yourself. Does that sound good to you?”

Yann LeCun, Meta’s chief AI scientist, agrees with the basic premise but doesn’t share Hinton’s concerns. “There is no question that in the future machines will be smarter than humans – in all areas where humans are smart,” says LeCun. “It’s a question of when and how, not if.” But LeCun has a very different opinion on how to proceed now. “I believe that intelligent machines will herald a new renaissance for humanity, a new era of enlightenment,” says the Meta researcher. He doesn’t think machines will dominate humans just because they’re smarter. “Let alone that they would destroy mankind.” Even within the human species, the smartest among us are not the most dominant.

Yoshua Bengio, a professor at the University of Montreal and scientific director of the Montreal Institute for Learning Algorithms, takes a position in between. “I hear people downplaying fears like this,” he says, but he sees no compelling argument that risks of the magnitude Hinton has in mind do not exist. Fear is only useful if it spurs action, however: “Excessive fear can be paralyzing, so we should try to keep the debates at a rational level.”

One of Hinton’s future priorities is to work with technology industry leaders to see if they can agree on the risks and the actions to be taken. He believes that the international ban on chemical weapons could be a model for curbing the development and use of dangerous AI. “While it’s not foolproof, by and large, humanity doesn’t use chemical weapons,” he says.

Just look up

His Montreal colleague Bengio agrees with Hinton that these issues need to be addressed at the societal level as soon as possible. But he also argues that AI development is progressing faster than societies can keep up. Progress is measured in months, while legislation, regulation, and international treaties take years.

As such, Bengio wonders if the way our societies are currently organized – both nationally and globally – is up to the challenge. “I believe that we should be open to using very different models for the social organization of our planet,” he says.

But does Hinton really think he can get enough people in power to take his concerns seriously? He doesn’t know himself. A few weeks ago he watched the movie Don’t Look Up, in which an asteroid hurtles towards Earth but people cannot agree on what to do about it. In the end, almost everyone dies – an allegory for the world’s failure to combat climate change. “I think it’s the same with artificial intelligence,” he says – and also with other big, unsolvable problems. “The US can’t even agree to keep assault rifles out of the hands of teenagers.”

So Hinton’s view of things is one of disillusionment. One can certainly share his grim assessment of people’s collective inability to act when confronted with serious threats. It is also true that AI can do real harm – transforming the labor market, perpetuating inequality, exacerbating sexism and racism, and much more. Humanity needs to focus on these issues. But does that also mean that large language models will really become our rulers, our terminators? Maybe you have to be an optimist not to believe that.
