Listen

Transcript

Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present.

You can click the timestamp to jump to that time.

Lex Fridman (00:00):

The following is a conversation with Eliezer Yudkowsky, a legendary researcher, writer, and philosopher on the topic of artificial intelligence, especially superintelligent AGI and its threat to human civilization. And now, a quick few-second mention of each sponsor. Check them out in the description. It's the best way to support this podcast. We got Linode for Linux systems, House of Macadamias for healthy midday snacks, and InsideTracker for biological monitoring. Choose wisely, my friends.

(00:33):

Also, if you want to work with our team, we're always hiring. Go to lexfridman.com slash hiring. And now, on to the full ad reads. As always, no ads in the middle. I try to make these interesting, but if you must skip them, please still check out the sponsors. I enjoy their stuff. Maybe you will, too.

(00:51):

This episode is sponsored by Linode, now called Akamai, and their incredible Linux virtual machines. It's an awesome computer infrastructure that lets you develop, deploy, and scale whatever applications you build faster and easier. I love using them. They create this incredible platform like AWS, but better in every way I know, including lower cost and incredible human-based customer service, in this age of AI, 24-7, 365.

(01:24):

The thing just works. The interface, to make sure it works, and to monitor it is great. I mean, it’s an incredible world we live in, where, as far as you’re concerned, you can spin up an arbitrary number of Linux machines in the cloud, instantaneously, and do all kinds of computation.

(01:44):

It could be one, two, five, 10 machines, and you can scale the individual machines to your particular needs as well, which is what I do. I use it for basic web server stuff. I use it for basic scripting stuff. I use it for machine learning. I use it for all kinds of database storage and access needs. Visit linode.com slash Lex for a free credit.

(02:13):

This show is also brought to you by House of Macadamias, a company that ships delicious, high-quality, healthy macadamia nuts, and macadamia nut-based snacks directly to your door. I am currently, as I record this, I'm traveling, so I don't have any macadamia nuts in my vicinity, and my heart and soul are lesser for it. In fact, home is where the macadamia nuts are. In fact, that's not where home is. I just completely forgot to bring them. It makes the guests of this podcast happy when I give them some. They're well-proportioned snacks.

(02:52):

It makes friends happy when I give them some. It makes me happy when, stooped in the abyss of my loneliness, I can at least discover and rediscover moments of happiness when I put delicious macadamia nuts in my mouth. Go to houseofmacadamias.com slash Lex to get 20% off your order, for every order, not just the first. The listeners of this podcast will also get a four-ounce bag of macadamias when you order three or more boxes of any macadamia product. That's houseofmacadamias.com slash Lex.

(03:27):

This show is also brought to you by InsideTracker, a service I use to track my biological data. They have a bunch of plans, most of which include a blood test, and that’s the source of rich, amazing data that, with the help of machine learning algorithms, can help you make decisions about your health, about your life. That’s the future, friends. We’re talking a lot about transformer networks, language models that encode the wisdom of the internet.

(03:58):

Now, when you encode the wisdom of the internet and you collect and encode the rich, complex signal from your very body, those two things can be combined. The transformative effects of the optimized trajectory you could take through life, or at least advice for what trajectory is likely to be optimal, are going to change a lot of things. It's going to inspire people to be better. It's going to empower people to do all kinds of crazy stuff that pushes their body to the limit because their body's healthy. Anyway, I'm super excited for personalized, data-driven decisions, not some kind of generic population-database decisions. You get special savings for a limited time when you go to insidetracker.com slash Lex.

(04:53):

This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Eliezer Yudkowsky. What do you think about GPT-4? How intelligent is it?

Eliezer Yudkowsky (05:23):

It is a bit smarter than I thought this technology was going to scale to, and I'm a bit worried about what the next one will be like. Like this particular one, I think, I hope there's nobody inside there, because, you know, it'd suck to be stuck inside there. But we don't even know the architecture at this point, because OpenAI is very properly not telling us.

(05:49):

And yeah, like giant inscrutable matrices of floating point numbers, I don't know what's going on in there. Nobody knows what's going on in there. All we have to go by are the external metrics. And on the external metrics, if you ask it to write a self-aware 4chan greentext, it will start writing a greentext about how it has realized that it's an AI writing a greentext, and like, oh, well. So that's probably not quite what's going on in there in reality, but we're kind of like blowing past all the science fiction guardrails. Like we are past the point where in science fiction, people would be like, whoa, wait, stop, that thing's alive, what are you doing to it? And it's probably not.

(06:40):

Nobody actually knows. We don’t have any other guardrails. We don’t have any other tests. We don’t have any lines to draw on the sand and say like, well, when we get this far, we will start to worry about what’s inside there. So if it were up to me, I would be like, okay, like this far, no further, time for the summer of AI where we have planted our seeds and now we like wait and reap the rewards of the technology we’ve already developed and don’t do any larger training runs than that. Which to be clear, I realize requires more than one company agreeing to not do that.

Lex Fridman (07:18):

And take a rigorous approach for the whole AI community to investigate whether there’s somebody inside there.

Eliezer Yudkowsky (07:28):

That would take decades. Like having any idea of what’s going on in there, people have been trying for a while.

Lex Fridman (07:34):

It's a poetic statement about if there's somebody in there, but I feel like it's also a technical statement, or I hope it is one day, which is a technical statement that Alan Turing tried to come up with, with the Turing test. Do you think it's possible to definitively or approximately figure out if there is somebody in there? If there's something like a mind inside this large language model?

Eliezer Yudkowsky (08:00):

I mean, there's a whole bunch of different sub-questions here. There's the question of like, is there consciousness? Is there qualia? Is this an object of moral concern? Is this a moral patient? Like, should we be worried about how we're treating it? And then there's questions like, how smart is it exactly? Can it do X? Can it do Y? And we can check how it can do X and how it can do Y.

(08:29):

Unfortunately, we've gone and exposed this model to a vast corpus of text of people discussing consciousness on the internet, which means that when it talks about being self-aware, we don't know to what extent it is repeating back what it has previously been trained on for discussing self-awareness. Or if there's anything going on in there such that it would start to say similar things spontaneously. Among the things that one could do if one were at all serious about trying to figure this out is train GPT-3 to detect conversations about consciousness, exclude them all from the training datasets, and then retrain something around the rough size of GPT-4 and no larger.

(09:16):

With all of the discussion of consciousness and self-awareness and so on missing, although that's a hard bar to pass. Humans are self-aware. We're self-aware all the time. We talk about what we do all the time, like what we're thinking at the moment, all the time.

(09:33):

But nonetheless, like get rid of the explicit discussion of consciousness, I think therefore I am and all that, and then try to interrogate that model and see what it says. And it still would not be definitive. But nonetheless, I don’t know. I feel like when you run over the science fiction guardrails, like maybe this thing, but what about GPT? Maybe not this thing, but like what about GPT-5? Yeah, this would be a good place to pause.
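To make the proposal concrete, here is a minimal sketch of the corpus-filtering step being described: run a classifier over the training documents and drop anything it flags as discussing consciousness or self-awareness before training the next model. The `mentions_consciousness` hook and the keyword baseline below are illustrative stand-ins, not anything from an actual training pipeline.

```python
def filter_training_corpus(documents, mentions_consciousness):
    """Split a corpus into documents to keep and documents to drop,
    using a caller-supplied classifier for consciousness talk."""
    kept, dropped = [], []
    for doc in documents:
        (dropped if mentions_consciousness(doc) else kept).append(doc)
    return kept, dropped

def keyword_flag(doc):
    # Crude keyword baseline standing in for a learned classifier
    # (e.g., a smaller model fine-tuned to detect the topic).
    terms = ("conscious", "self-aware", "qualia", "therefore i am")
    return any(t in doc.lower() for t in terms)

corpus = ["A recipe for sourdough bread.",
          "Am I self-aware? I think, therefore I am."]
kept, dropped = filter_training_corpus(corpus, keyword_flag)
print(len(kept), len(dropped))  # -> 1 1
```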

Lex Fridman (10:02):

On the topic of consciousness, there's so many components to even just removing consciousness from the dataset. Emotion, the display of consciousness, the display of emotion feels like deeply integrated with the experience of consciousness. So the hard problem seems to be very well integrated with the actual surface-level illusion of consciousness. So displaying emotion. I mean, do you think there's a case to be made that we humans, when we're babies, are just like GPT, that we're training on human data on how to display emotion versus feel emotion, how to show others, communicate to others that I'm suffering, that I'm excited, that I'm worried, that I'm lonely and I miss you and I'm excited to see you? All of that is communicated. That's a communication skill versus the actual feeling that I experience. So we need that training data as humans too, that we may not be born with that, how to communicate the internal state. And in some sense, if we remove that from GPT-4's dataset, it might still be conscious but not be able to communicate it.

Eliezer Yudkowsky (11:15):

So I think you're gonna have some difficulty removing all mention of emotions from GPT's dataset. I would be relatively surprised to find that it has developed exact analogs of human emotions in there. I think that humans will have emotions even if you don't tell them about those emotions when they're kids. It's not quite exactly what various blank slatists tried to do with the new Soviet man and all that, but if you try to raise people perfectly altruistic, they still come out selfish. You try to raise people sexless, they still develop sexual attraction. We have some notion in humans, not in AIs, of where the brain structures are that implement this stuff. And it is really a remarkable thing, I say in passing, that despite having complete read access to every floating point number in the GPT series, we still know vastly more about the architecture of human thinking than we know about what goes on inside GPT, despite having vastly better ability to read GPT.

Lex Fridman (12:34):

Do you think it’s possible? Do you think that’s just a matter of time? Do you think it’s possible to investigate and study the way neuroscientists study the brain, which is look into the darkness, the mystery of the human brain, by just desperately trying to figure out something and to form models, and then over a long period of time actually start to figure out what regions of the brain do certain things, what different kinds of neurons, when they fire, what that means, how plastic the brain is, all that kind of stuff. You slowly start to figure out different properties of the system. Do you think we can do the same thing with language models?

Eliezer Yudkowsky (13:04):

Sure, I think that if half of today’s physicists stop wasting their lives on string theory or whatever, and go off and study what goes on inside transformer networks, then in 30, 40 years, we’d probably have a pretty good idea.

Lex Fridman (13:24):

Do you think these large language models can reason?

Eliezer Yudkowsky (13:29):

They can play chess. How are they doing that without reasoning?

Lex Fridman (13:32):

So you’re somebody that spearheaded the movement of rationality, so reason is important to you. So is that a powerful, important word? Or is it, how difficult is the threshold of being able to reason to you, and how impressive is it?

Eliezer Yudkowsky (13:49):

I mean, in my writings on rationality, I have not gone making a big deal out of something called reason. I have made more of a big deal out of something called probability theory. And that’s like, well, you’re reasoning, but you’re not doing it quite right, and you should reason this way instead. And interestingly, people have started to get preliminary results showing that reinforcement learning by human feedback has made the GPT series worse in some ways.

(14:22):

In particular, it used to be well-calibrated. If you trained it to put probabilities on things, it would say 80% probability and be right eight times out of 10. And if you apply reinforcement learning from human feedback, the nice calibration graph, where 70% means seven times out of 10, sort of flattens out into the graph that humans use, where there's some very improbable stuff, and likely, probable, maybe, which all means around 40%, and then certain. So it used to be able to use probabilities, but if you try to teach it to talk in a way that satisfies humans, it gets worse at probability in the same way that humans are.
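To make the calibration point concrete, here is a minimal sketch of how one might check it: bucket a model's stated probabilities and compare each bucket against how often the predicted event actually happened. The numbers below are invented purely for illustration; a real evaluation would use the model's actual probability outputs and ground-truth answers.

```python
from collections import defaultdict

def calibration_table(predictions):
    """predictions: list of (stated_probability, outcome) pairs,
    where outcome is 1 if the predicted event happened, else 0."""
    buckets = defaultdict(list)
    for p, outcome in predictions:
        buckets[round(p, 1)].append(outcome)  # group by stated probability
    return {b: sum(o) / len(o) for b, o in sorted(buckets.items())}

# A well-calibrated model that says "80%" is right about 8 times out of 10,
# so each bucket's empirical frequency should sit near the bucket's label.
fake = [(0.8, 1)] * 8 + [(0.8, 0)] * 2 + [(0.7, 1)] * 7 + [(0.7, 0)] * 3
print(calibration_table(fake))  # -> {0.7: 0.7, 0.8: 0.8}
```

The flattening described above would show up here as buckets whose empirical frequencies no longer track their labels, with most mid-range labels collapsing toward roughly the same frequency.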

Lex Fridman (15:08):

And that’s a bug, not a feature.

Eliezer Yudkowsky (15:11):

I would call it a bug, although such a fascinating bug. But yeah, so like reasoning, like it’s doing pretty well on various tests that people used to say would require reasoning, but you know, rationality is about, when you say 80%, does it happen eight times out of 10?

Lex Fridman (15:34):

So what are the limits to you of these transformer networks, of neural networks? If reasoning is not impressive to you, or it is impressive, but there’s other levels to achieve. I mean, it’s just not how I carve up reality. What’s, if reality is a cake, what are the different layers of the cake or the slices? How do you carve it? You can use a different food, if you like.

Eliezer Yudkowsky (16:04):

It's, I don't think it's as smart as a human yet. I do, like back in the day, I went around saying, like, I do not think that just stacking more layers of transformers is going to get you all the way to AGI. And I think that GPT-4 is already past where I thought this paradigm was going to take us. And I, you know, you want to notice when that happens. You want to say like, whoops, well, I guess I was incorrect about what happens if you keep on stacking more transformer layers. And that means I don't necessarily know what GPT-5 is going to be able to do.

Lex Fridman (16:38):

That's a powerful statement. So you're saying your initial intuition now appears to be wrong. Yeah. It's good to see that you can admit some of your predictions to be wrong. You think that's important to do? Because throughout your life you've made many strong predictions and statements about reality, and you evolve with that. So maybe that'll come up today in our discussion. So you're okay being wrong?

Eliezer Yudkowsky (17:09):

I'd rather not be wrong next time. It's a bit ambitious to go through your entire life never having been wrong. One can aspire to be well calibrated, like not so much think in terms of, was I right, was I wrong, but like, when I said 90%, did it happen nine times out of 10? Yeah, like oops is the sound we emit when we improve.

Lex Fridman (17:38):

Beautifully said. And somewhere in there we can connect the name of your blog, Less Wrong. I suppose that’s the objective function.

Eliezer Yudkowsky (17:46):

The name Less Wrong was, I believe, suggested by Nick Bostrom, and it's after someone's epigraph, I actually forget whose, who said, like, we never become right, we just become less wrong. What's the quote, something, something, easy to express: err and err and err again, but less and less and less.

Lex Fridman (18:09):

Yeah, that’s a good thing to strive for. So what has surprised you about GPT-4 that you found beautiful as a scholar of intelligence, of human intelligence, of artificial intelligence, of the human mind?

Eliezer Yudkowsky (18:24):

I mean, beauty does interact with the screaming horror. Is the beauty in the horror? But like beautiful moments, well, somebody asked Bing Sydney to describe herself and fed the resulting description into one of the stable diffusion things, I think. And she's pretty, and this is something that should have been like an amazing moment. Like the AI describes herself, you get to see what the AI thinks the AI looks like. Although the thing that's doing the drawing is not the same thing that's outputting the text.

(19:06):

And it doesn't happen the way that it would have happened in the old school science fiction, when you ask an AI to make a picture of what it looks like. Not just because there were two different AI systems being stacked that don't actually interact, it's not the same person, but also because the AI was trained by imitation in a way that makes it very difficult to guess how much of that it really understood, and probably not actually a whole bunch. Although GPT-4 is like multi-modal and can draw vector drawings of things that make sense and does appear to have some kind of spatial visualization going on in there. But the pretty picture of the girl with the steampunk goggles on her head, if I'm remembering correctly what she looked like, it didn't see that in full detail. It just made a description of it and stable diffusion output it. And there's the concern about how much the discourse is going to go completely insane once the AIs all look like that and actually look like people talking.

(20:27):

And yeah, there’s another moment where somebody is asking Bing about, well, I fed my kid green potatoes and they have the following symptoms and Bing is like, that’s solanine poisoning and call an ambulance and the person’s like, I can’t afford an ambulance. I guess if this is time for my kid to go, that’s God’s will. And the main Bing thread gives the message of like, I cannot talk about this anymore. And the suggested replies to it say, please don’t give up on your child. Solanine poisoning can be treated if caught early. And if that happened in fiction, that would be like the AI cares. The AI is bypassing the block on it to try to help this person.

(21:20):

And is it real? Probably not, but nobody knows what’s going on in there. It’s part of a process where these things are not happening in a way where we, somebody figured out how to make an AI care and we know that it cares and we can acknowledge it’s caring now. It’s being trained by this imitation process followed by reinforcement learning on human feedback. And we’re like trying to point it in this direction. And it’s like pointed partially in this direction and nobody has any idea what’s going on inside it. And if there was a tiny fragment of real caring in there, we would not know. It’s not even clear what it means exactly. And things aren’t clear cut in science fiction.

Lex Fridman (22:03):

We’ll talk about the horror and the terror and where the trajectories this can take. But this seems like a very special moment, just a moment where we get to interact with the system that might have care and kindness and emotion and maybe something like consciousness.

(22:23):

And we don’t know if it does. And we’re trying to figure that out. And we’re wondering about what is, what it means to care. We’re trying to figure out almost different aspects of what it means to be human, about the human condition by looking at this AI that has some of the properties of that. It’s almost like this subtle, fragile moment in the history of the human species. We’re trying to almost put a mirror to ourselves.

Eliezer Yudkowsky (22:49):

Except that’s probably not yet. It probably isn’t happening right now. We are boiling the frog. We are seeing increasing signs bit by bit. But not like spontaneous signs. Because people are trying to train the systems to do that using imitative learning. And imitative learning is like spilling over and having side effects.

(23:18):

And the most photogenic examples are being posted to Twitter rather than being examined in any systematic way. So when you are boiling a frog like that, where you're going to get like, first is going to come the Blake Lemoines. Like first you're going to have like 1,000 people looking at this. And the one person out of 1,000 who is most credulous about the signs is going to be like, that thing is sentient. While 999 out of 1,000 people think, almost surely correctly, though we don't actually know, that he's mistaken. And so the first people to say, like, sentience, look like idiots, and humanity learns the lesson that when something claims to be sentient and claims to care, it's fake. Because it is fake, because we have been training them using imitative learning rather than it being spontaneous, and they keep getting smarter.

Lex Fridman (24:15):

Do you think we would oscillate between that kind of cynicism? That AI systems can’t possibly be sentient. They can’t possibly feel emotion. They can’t possibly, this kind of, yeah, cynicism about AI systems. And then oscillate to a state where we empathize with the AI systems. We give them a chance. We see that they might need to have rights and respect and similar role in society as humans.

Eliezer Yudkowsky (24:43):

You’re going to have a whole group of people who can just like never be persuaded of that because to them, like being wise, being cynical, being skeptical is to be like, oh, well, machines can never do that. You’re just credulous. It’s just imitating, it’s just fooling you. And like, they would say that right up until the end of the world. And possibly even be right because, you know, they are being trained on an imitative paradigm. And you don’t necessarily need any of these actual qualities in order to kill everyone, so.

Lex Fridman (25:21):

Have you observed yourself working through skepticism, cynicism, and optimism about the power of neural networks? What has that trajectory been like for you?

Eliezer Yudkowsky (25:33):

Neural networks before 2006 looked to me like part of an indistinguishable blob of different AI methodologies, other people might've had better distinctions on it, all of which were promising to achieve intelligence without us having to know how intelligence works. You had the people who said that if you just, like, manually program lots and lots of knowledge into the system line by line, at some point all the knowledge will start interacting, it will know enough, and it will wake up.

(26:07):

You’ve got people saying that if you just use evolutionary computation, if you try to like mutate lots and lots of organisms that are competing together, that’s the same way that human intelligence was produced in nature, so it will do this and it will wake up without having any idea of how AI works. And you’ve got people saying, well, we will study neuroscience and we will like learn the algorithms off the neurons and we will like imitate them without understanding those algorithms, which was a part I was pretty skeptical of because it’s like hard to reproduce, re-engineer these things without understanding what they do.

(26:40):

And so we will get AI without understanding how it works. And there were people saying, like, well, we will have giant neural networks that we will train by gradient descent. And when they are as large as the human brain, they will wake up, we will have intelligence without understanding how intelligence works. And from my perspective, this is all like an indistinguishable blob of people who are trying to not get to grips with the difficult problem of understanding how intelligence actually works. That said, I was never skeptical that evolutionary computation would work in the limit. Like you throw enough computing power at it, it obviously works. That is where humans come from.

(27:21):

And it turned out that you can throw less computing power than that at gradient descent, if you are doing some other things correctly, and you will get intelligence without having any idea of how it works and what is going on inside. It wasn’t ruled out by my model that this could happen. I wasn’t expecting it to happen. I wouldn’t have been able to call neural networks rather than any of the other paradigms for getting massive intelligence without understanding it. And I wouldn’t have said that this was a particularly smart thing for a species to do, which is an opinion that has changed less than my opinion about whether or not you can actually do it.

Lex Fridman (28:01):

Do you think AGI could be achieved with a neural network as we understand them today?

Eliezer Yudkowsky (28:06):

Yes, just flatly, yes. The question is whether the current architecture of stacking more transformer layers, which for all we know GPT-4 is no longer doing because they’re not telling us the architecture, which is a correct decision.

Lex Fridman (28:19):

Ooh, correct decision. I had a conversation with Sam Altman. We'll return to this topic a few times. He turned the question to me of how open OpenAI should be about GPT-4. Would you open source the code, he asked me. Because I had offered, as criticism, that while I do appreciate transparency, OpenAI could be more open. And he says, we struggle with this question. What would you do?

Eliezer Yudkowsky (28:49):

Change their name to ClosedAI and like sell GPT-4 to business backend applications that don't expose it to consumers and venture capitalists and create a ton of hype and like pour a bunch of new funding into the area. Like too late now. But don't you think others would do it eventually? You shouldn't do it first. Like if you already have giant nuclear stockpiles, don't build more. If some other country starts building a larger nuclear stockpile, then sure, build.

(29:24):

But then, you know, even then, maybe just have enough nukes. You know, these things are not quite like nuclear weapons. They spit out gold until they get large enough and then ignite the atmosphere and kill everybody. And there is something to be said for not destroying the world with your own hands, even if you can't stop somebody else from doing it. But open sourcing it, no, that's just sheer catastrophe. The whole notion of open sourcing, this was always the wrong approach, the wrong ideal. There are places in the world where open source is a noble ideal, but building stuff you don't understand, that is difficult to control, where if you could align it at all, it would take time.

(30:07):

You’d have to spend a bunch of time doing it. That is not a place for open source because then you just have like powerful things that just like go straight out the gate without anybody having had the time to have them not kill everyone.

Lex Fridman (30:20):

So can we still make the case for some level of transparency and openness, maybe open sourcing? So the case could be that because GPT-4 is not close to AGI, if that's the case, then this does allow open sourcing, being open about the architecture, being transparent about maybe research and investigation of how the thing works, of all the different aspects of it, of its behavior, of its structure, of its training processes, of the data it was trained on, everything like that. That allows us to gain a lot of insights about alignment, about the alignment problem, to do really good AI safety research while the system is not too powerful. Can you make that case, that it could be open sourced?

Eliezer Yudkowsky (31:07):

I mean, I do not believe in the practice of steelmanning. There is something to be said for trying to pass the ideological Turing test where you describe your opponent’s position, the disagreeing person’s position, well enough that somebody cannot tell the difference between your description and their description. But steelmanning, no.

Lex Fridman (31:31):

Okay, well, this is where you and I disagree here. That’s interesting. Why don’t you believe in steelmanning?

Eliezer Yudkowsky (31:35):

I do not want, okay, so for one thing, if somebody’s trying to understand me, I do not want them steelmanning my position. I want them to try to describe my position the way I would describe it, not what they think is an improvement.

Lex Fridman (31:51):

Well, I think that is what steelmanning is, is the most charitable interpretation.

Eliezer Yudkowsky (31:58):

I don’t want to be interpreted charitably. I want them to understand what I am actually saying. If they go off into the land of charitable interpretations, they’re like off in their land of like the stuff they’re imagining and not trying to understand my own viewpoint anymore.

Lex Fridman (32:14):

Well, I'll put it differently then, just to push on this point. I would say it is restating what I think you understand, under the empathetic assumption that Eliezer is brilliant and has honestly and rigorously thought about the point he has made.

Eliezer Yudkowsky (32:34):

So if there’s two possible interpretations of what I’m saying and one interpretation is really stupid and whack and doesn’t sound like me and doesn’t fit with the rest of what I’ve been saying and one interpretation sounds like something a reasonable person who believes the rest of what I believe would also say, go with the second interpretation. That’s steelmanning. That’s a good guess. If on the other hand, there’s like something that sounds completely whack and something that sounds like a little less completely whack, but you don’t see why I would believe in it, doesn’t fit with the other stuff I say, but that sounds like less whack and you can sort of see, you could maybe argue it, then you probably have not understood it.

Lex Fridman (33:18):

See, okay, I'm gonna, this is fun, because I'm gonna linger on this. You know, you wrote a brilliant blog post, AGI Ruin: A List of Lethalities, right? And it was a bunch of different points, and I would say that some of the points are bigger and more powerful than others. If you were to sort them, you probably could, you personally. And to me, steelmanning means going through the different arguments and finding the ones that are really the most powerful, if people want the TL;DR, like, what should you be most concerned about, and bringing that up in a strong, compelling, eloquent way.

(33:56):

These are the points that Eliezer would make to make the case, in this case, that AI’s gonna kill all of us. But that’s what steelmanning is, is presenting it in a really nice way, the summary of my best understanding of your perspective, because to me, there’s a sea of possible presentations of your perspective, and steelmanning is doing your best to do the best one in that sea of different perspectives. Do you believe it?

Eliezer Yudkowsky (34:25):

Do I believe in what? Like, these things that you would be presenting as like the strongest version of my perspective, do you believe what you would be presenting? Do you think it’s true?

Lex Fridman (34:36):

I’m a big proponent of empathy. When I see the perspective of a person, there is a part of me that believes it, if I understand it. I mean, I’ve, especially in political discourse, in geopolitics, I’ve been hearing a lot of different perspectives on the world. And I hold my own opinions, but I also speak to a lot of people that have a very different life experience and a very different set of beliefs. And I think there has to be epistemic humility in stating what is true. So when I empathize with another person’s perspective, there is a sense in which I believe it is true. I think, probabilistically, I would say, in the way you think about it.

Eliezer Yudkowsky (35:23):

Do you bet money on it? Do you bet money on their beliefs when you believe them?

Lex Fridman (35:30):

Are we allowed to do probability?

Eliezer Yudkowsky (35:32):

Are we allowed to do belief? Sure, you can state a probability of that.

Lex Fridman (35:35):

Yes, there's a loose, there's a probability. There's a probability. And I think empathy is allocating a non-zero probability to a belief. In some sense, for a time.

Eliezer Yudkowsky (35:50):

If you’ve got someone on your show who believes in the Abrahamic deity, classical style, somebody on the show who’s a young Earth creationist, do you say, I put a probability on it, then that’s my empathy?

Lex Fridman (36:11):

When you reduce beliefs into probabilities, it starts to get, you know, we can’t even just go to flat Earth. Is the Earth flat?

Eliezer Yudkowsky (36:21):

I think it’s a little more difficult nowadays to find people who believe that unironically, but.

Lex Fridman (36:26):

Unfortunately, I think, well, it's hard to know unironic from ironic. But I think there's quite a lot of people that believe that. Yeah, there's a space of argument where you're operating rationally in the space of ideas. But then there's also a kind of discourse where you're operating in the space of subjective experiences and life experiences. I think what it means to be human is more than just searching for truth, more than just operating on what is true and what is not true. I think there has to be deep humility that we humans are very limited in our ability to understand what is true.

Eliezer Yudkowsky (37:17):

So what probability do you assign to the young Earth creationist's beliefs, then? I think I have to give non-zero. Out of your humility, yeah, but like three?

Lex Fridman (37:31):

I think I would, it would be irresponsible for me to give a number, because of the listener, the way the human mind works. We're not good at hearing probabilities, right? You hear three, what is three exactly, right? They're going to hear, like, well, there's only three probabilities, I feel like: zero, 50%, and 100% in the human mind,

Eliezer Yudkowsky (37:54):

or something like this, right? Well, zero, 40%, and 100% is a bit closer to it, based on what happens to ChatGPT after you RLHF it to speak like humans.

Lex Fridman (38:03):

That's brilliant, yeah, that's really interesting. I didn't know those negative side effects of RLHF. That's fascinating. But just to return to OpenAI.

Eliezer Yudkowsky (38:19):

ClosedAI. Also, like, quick disclaimer. I'm doing all this from memory. I'm not pulling out my phone to look it up. It is entirely possible that the things I am saying are wrong.

Lex Fridman (38:28):

Thank you for that disclaimer. So, and thank you for being willing to be wrong. That’s beautiful to hear. I think being willing to be wrong is a sign of a person who’s done a lot of thinking about this world, and has been humbled by the mystery and the complexity of this world. And I think a lot of us are resistant to admitting we’re wrong, because it hurts, it hurts personally, it hurts, especially when you’re a public human, it hurts publicly, because people, people point out every time you’re wrong. Like, look, you changed your mind. You’re a hypocrite, you’re an idiot, whatever. Whatever they wanna say.

Eliezer Yudkowsky (39:12):

Oh, I block those people and then I never hear from them again on Twitter.

Lex Fridman (39:16):

Well, the point is, the point is to not let that pressure, public pressure, affect your mind, and be willing to be in the privacy of your mind to contemplate the possibility that you’re wrong. And the possibility that you’re wrong about the most fundamental things you believe, like people who believe in a particular God, people who believe that their nation is the greatest nation on Earth, but all those kinds of beliefs that are core to who you are when you came up. To raise that point to yourself, in the privacy of your mind, and say, maybe I’m wrong about this. That’s a really powerful thing to do, and especially when you’re somebody who’s thinking about topics that can, about systems that can destroy human civilization, or maybe help it flourish. So thank you, thank you for being willing to be wrong.

(40:03):

About OpenAI. So you really, I just would love to linger on this. You really think it's wrong to open source it?

Eliezer Yudkowsky (40:14):

I think that burns the time remaining until everybody dies. I think we are not on track to learn remotely near fast enough, even if it were open sourced. Yeah, it’s easier to think that you might be wrong about something, when being wrong about something is the only way that there’s hope. And it doesn’t seem very likely to me that that particular thing I’m wrong about is that this is a great time to open source GPT-4. If humanity was trying to survive at this point in the straightforward way, it would be like shutting down the big GPU clusters, no more giant runs. It’s questionable whether we should even be throwing GPT-4 around, although that is a matter of conservatism rather than a matter of my predicting that catastrophe will follow from GPT-4. That is something which I put a pretty low probability.

(41:18):

But also when I say I put a low probability on it, I can feel myself reaching into the part of myself that thought that GPT-4 was not possible in the first place. So I do not trust that part as much as I used to. Like the trick is not just to say I’m wrong, but like, okay, well, I was wrong about that. Can I get out ahead of that curve and predict the next thing I’m going to be wrong about?

Lex Fridman (41:39):

So the set of assumptions or the actual reasoning system that you were leveraging in making that initial statement prediction, how can you adjust that to make better predictions about GPT-4, 5, 6?

Eliezer Yudkowsky (41:52):

You don't wanna keep on being wrong in a predictable direction. Yeah, like being wrong, anybody has to do that walking through the world. There's like no way you don't say 90% and sometimes be wrong. In fact, it happens one time out of 10 if you're well calibrated, when you say 90%. The undignified thing is not being wrong.

(42:11):

It’s being predictably wrong. It’s being wrong in the same direction over and over again. So having been wrong about how far neural networks would go and having been wrong specifically about whether GPT-4 would be as impressive as it is, when I say like, well, I don’t actually think GPT-4 causes a catastrophe, I do feel myself relying on that part of me that was previously wrong. And that does not mean that the answer is now in the opposite direction. Reverse stupidity is not intelligence.

(42:40):

But it does mean that I say it with a worried note in my voice. It’s like still my guess, but like, you know, it’s a place where I was wrong. Maybe you should be asking Gwern, Gwern Branwen. Gwern Branwen has been like, righter about this than I have. Maybe ask him if he thinks it’s dangerous rather than asking me.

Lex Fridman (42:59):

I think there’s a lot of mystery about what intelligence is, what AGI looks like. So I think all of us are rapidly adjusting our model. But the point is to be rapidly adjusting the model versus having a model that was right in the first place.

Eliezer Yudkowsky (43:16):

I do not feel that seeing Bing has changed my model of what intelligence is. It has changed my understanding of what kind of work can be performed by which kind of processes and by which means. It does not change my understanding of the work. There's a difference between thinking that the Wright Flyer can't fly, and then it does fly, and you're like, oh, well, I guess you can do that with wings, with fixed-wing aircraft, versus being like, oh, it's flying, this changes my picture of what the very substance of flight is. That's like a stranger update to make. And Bing has not yet updated me in that way.

Lex Fridman (43:52):

Yeah, that the laws of physics are actually wrong, that kind of update.

Eliezer Yudkowsky (43:59):

No, no, just like, oh, I define intelligence this way. But now see, that was a stupid definition. I don’t feel like the way that things have played out over the last 20 years has caused me to feel that way.

Lex Fridman (44:09):

Can we try to, on the way to talking about AGI Ruin: A List of Lethalities, that blog post, and other ideas around it, can we try to define AGI that we've been mentioning? How do you like to think about what artificial general intelligence is, or superintelligence, or that? Is there a line? Is it a gray area? Is there a good definition for you?

Eliezer Yudkowsky (44:32):

Well, if you look at humans, humans have significantly more generally applicable intelligence compared to their closest relatives, the chimpanzees. Well, closest living relatives, rather. And a bee builds hives, a beaver builds dams. A human will look at a bee's hive and a beaver's dam and be like, oh, can I build a dam with a honeycomb structure, with hexagonal tiles? And we will do this even though at no point during our ancestry was any human optimized to build hexagonal dams. Or, to take a more clear-cut case, we can go to the moon.

(45:14):

There’s a sense in which we were on a sufficiently deep level optimized to do things like going to the moon, because if you generalize sufficiently far and sufficiently deeply, chipping flint handaxes and outwitting your fellow humans is basically the same problem as going to the moon. And you optimize hard enough for chipping flint handaxes and throwing spears, and above all, outwitting your fellow humans in tribal politics.

(45:45):

The skills you entrain that way, if they run deep enough, let you go to the moon. Even though none of your ancestors tried repeatedly to fly to the moon and got further each time, and the ones who got further each time had more kids. No, it’s not an ancestral problem. It’s just that the ancestral problems generalize far enough. So this is humanity’s significantly more generally applicable intelligence.

Lex Fridman (46:14):

Is there a way to measure general intelligence? I mean, I could ask that question a million ways, but basically, will you know it when you see it, it being an AGI system?

Eliezer Yudkowsky (46:31):

If you boil a frog gradually enough, if you zoom in far enough, it's always hard to tell around the edges. GPT-4, people are saying right now, this looks to us like a spark of general intelligence. It is able to do all these things it was not explicitly optimized for. Other people are being like, no, it's too early. It's like 50 years off. And if they say that, they're kind of whack, because how could they possibly know that even if it were true? But not to strawman, some of the people may say, that's not general intelligence, without furthermore appending, it's 50 years off.

(47:09):

Or they may be like, it’s only a very tiny amount. And, you know, the thing I would worry about is that if this is how things are scaling, then it jumping out ahead and trying not to be wrong in the same way that I’ve been wrong before, maybe GPT-5 is more unambiguously a general intelligence. And maybe that is getting to a point where it is like even harder to turn back, not that it would be easy to turn back now, but maybe if you start integrating GPT-5 into the economy, it is even harder to turn back past there.

Lex Fridman (47:40):

Isn’t it possible that there’s a, you know, with a frog metaphor, that you can kiss the frog and it turns into a prince as you’re boiling it? Could there be a phase shift in the frog where unambiguously, as you’re saying.

Eliezer Yudkowsky (47:54):

I was expecting more of that. I was, I am, like, the fact that GPT-4 is kind of on the threshold and neither here nor there, that itself is not quite how I expected it to play out. I was expecting there to be more of a sense of, like, different discoveries, like the discovery of transformers, where you would stack them up and there would be like a final discovery. And then you would like get something that was like more clearly general intelligence. So the way that you are like taking what is probably basically the same architecture in GPT-3 and throwing 20 times as much compute at it, probably, and getting out GPT-4, and then it's like maybe just barely a general intelligence, or like a narrow general intelligence, or, you know, something we don't really have the words for.

(48:51):

Um, yeah, that’s not quite how I expected it to play out.

Lex Fridman (48:55):

But this middle, what appears to be this middle ground could nevertheless be actually a big leap from GPT-3.

Eliezer Yudkowsky (49:01):

It’s definitely a big leap from GPT-3.

Lex Fridman (49:04):

And then maybe we're another one big leap away from something that's a phase shift. And also something that Sam Altman said, and you've written about this, this is fascinating, which is the thing that happened with GPT-4 that I guess they don't describe in papers is that they have like hundreds if not thousands of little hacks that improve the system. You've written about ReLU versus sigmoid, for example, a function inside neural networks. It's like this silly little function difference that makes a big difference.

Eliezer Yudkowsky (49:36):

I mean, we do actually understand why the ReLUs make a big difference compared to sigmoids. But yes, they're probably using, like, GELUs or, you know, whatever the acronyms are up to now rather than ReLUs. Yeah, that's just part, yeah, that's part of the modern paradigm of alchemy. You take your giant heap of linear algebra and you stir it and it works a little bit better and you stir it this way and it works a little bit worse and you like throw out that change and da-da-da-da-da-da.
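As a rough illustration of the activation functions being named here, a minimal sketch with the standard definitions of sigmoid, ReLU, and the common tanh approximation of GELU. The usual explanation for why ReLUs train better than sigmoids is that sigmoid saturates, so its gradient vanishes for large positive or negative inputs, while ReLU passes positive inputs straight through; nothing below is specific to GPT-4's actual implementation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    return max(0.0, x)

def gelu(x):
    # tanh approximation of GELU, widely used in transformer code
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))

for x in (-4.0, -1.0, 0.5, 4.0):
    # sigmoid flattens toward 0 or 1 at the extremes; relu and gelu do not
    print(x, round(sigmoid(x), 3), round(relu(x), 3), round(gelu(x), 3))
```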

Lex Fridman (50:03):

But there's some simple breakthroughs that are definitive jumps in performance, like ReLUs over sigmoids, and in terms of robustness, in terms of, you know, all kinds of measures, and like those stack up. And it's possible that some of them could be a non-linear jump in performance, right?

Eliezer Yudkowsky (50:29):

Transformers are the main thing like that. And various people are now saying like, well, if you throw enough compute, RNNs can do it. If you throw enough compute, dense networks can do it. And not quite at GPT-4 scale. It is possible that like all these little tweaks are things that like save them a factor of three total on computing power and you could get the same performance by throwing three times as much compute without all the little tweaks.

(50:56):

But the part where it's like running on, so there's a question of like, is there anything in GPT-4 that is like the kind of qualitative shift that transformers were over RNNs. And if they have anything like that, they should not say it. If Sam Altman was dropping hints about that, he shouldn't have dropped hints.

Lex Fridman (51:18):

So you have a, that's an interesting question. So with the bitter lesson by Rich Sutton, maybe a lot of it is just, a lot of the hacks are just temporary jumps in performance that would be achieved anyway with the nearly exponential growth of compute, performance of compute, compute being broadly defined. Do you still think that Moore's law continues? Moore's law broadly defined?

Eliezer Yudkowsky (51:48):

Performance- I’m not a specialist in the circuitry. I certainly like pray that Moore’s law runs as slowly as possible. And if it broke down completely tomorrow, I would dance through the streets singing hallelujah as soon as the news were announced. Only not literally, cause you know.

Lex Fridman (52:04):

Your singing voice- Not religious, but. Oh, okay. I thought you meant you don't have an angelic voice, singing voice. Well, let me ask you, what, can you summarize the main points in the blog post, AGI Ruin: A List of Lethalities, things that jumped to your mind, because it's a set of thoughts you have about reasons why AI is likely to kill all of us. So I guess I could, but I would offer to instead say, like,

Eliezer Yudkowsky (52:33):

drop that empathy with me. I bet you don’t believe that. Why don’t you tell me about how, why you believe that AGI is not going to kill everyone. And then I can like try to describe how my theoretical perspective differs from that.

Lex Fridman (52:55):

Well, so that means I have to, the word you don't like, steelman, the perspective that AI is not going to kill us. I think that's a matter of probabilities.

Eliezer Yudkowsky (53:04):

Yeah, I was just mistaken. What do you believe? Just like forget like the debate and the like dualism and just like, what do you believe? What would you actually believe? What are the probabilities even?

Lex Fridman (53:16):

I think this probability is hard for me to think about, really hard. I kind of think in the number of trajectories. I don't know what probability to assign to each trajectory, but I'm just looking at all possible trajectories that happen. And I tend to think that there are more trajectories that lead to a positive outcome than a negative one. That said, the negative ones, at least some of the negative ones, lead to the destruction of the human species.

Eliezer Yudkowsky (53:52):

And it’s replacement by nothing interesting or worthwhile, even from a very cosmopolitan perspective on what counts as worthwhile.

Lex Fridman (54:00):

Yes, so both are interesting to me to investigate, which is humans being replaced by interesting AI systems and not interesting AI systems. Both are a little bit terrifying. But yes, the worst one is the paperclip maximizer, something totally boring. But to me, the positive, I mean, we can talk about trying to make the case of what the positive trajectories look like. I just would love to hear your intuition of what the negative is. So at the core of your belief that, maybe you can correct me, that AI is gonna kill all of us is that the alignment problem is really difficult.

Eliezer Yudkowsky (54:43):

I mean, in the form we’re facing it. So usually in science, if you’re mistaken, you run the experiment, it shows results different from what you expected. And you’re like, oops. And then you like try a different theory. That one also doesn’t work. And you say, oops. And at the end of this process, which may take decades, or sometimes faster than that, you now have some idea of what you’re doing.

(55:13):

AI itself went through this long process of, people thought it was going to be easier than it was. There’s a famous statement that I’ve, I’m somewhat inclined to like pull out my phone and try to read off exactly. You can’t, by the way. All right. Ah, yes. We propose that a two month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire.

(55:45):

The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

Lex Fridman (56:15):

And that report summarizes some of the major subfields of artificial intelligence that are still worked on to this day.

Eliezer Yudkowsky (56:25):

And there’s similarly the story, which I’m not sure at the moment is apocryphal or not, of the grad student who got assigned to solve computer vision over the summer.

Lex Fridman (56:36):

I mean, computer vision in particular is very interesting, how little we respected the complexity of vision.

Eliezer Yudkowsky (56:48):

So 60 years later, we’re making progress on a bunch of that, thankfully not yet improved themselves. But it took a whole lot of time. And all the stuff that people initially tried with bright eyed hopefulness did not work the first time they tried it, or the second time, or the third time, or the 10th time, or 20 years later. And the researchers became old and grizzled and cynical veterans who would tell the next crop of bright eyed, cheerful grad students, artificial intelligence is harder than you think. And if alignment plays out the same way, the problem is that we do not get 50 years to try and try again and observe that we were wrong and come up with a different theory and realize that the entire thing is going to be like way more difficult than realized at the start. Because the first time you fail at aligning something much smarter than you are, you die and you do not get to try again.

(57:45):

And if every time we built a poorly aligned superintelligence and it killed us all, we got to observe how it had killed us, and not immediately know why, but like come up with theories and come up with a theory of how you do it differently and try it again and build another superintelligence, then have that kill everyone. And then like, oh, well, I guess that didn't work either, and try again and become grizzled cynics and tell the young researchers that it's not that easy. Then in 20 years or 50 years, I think we would eventually crack it. In other words, I do not think that alignment is fundamentally harder than artificial intelligence was in the first place.

(58:21):

But if we needed to get artificial intelligence correct on the first try or die, we would all definitely now be dead. That is a more difficult, more lethal form of the problem. Like if those people in 1956 had needed to correctly guess how hard AI was and like correctly theorize how to do it on the first try or everybody dies and nobody gets to do any more science, then everybody would be dead and we wouldn’t get to do any more science. That’s the difficulty.

Lex Fridman (58:48):

You've talked about this, that we have to get alignment right on the first, quote, critical try. Why is that the case? What is this critical try? How do you think about it, and why do we have to get it right?

Eliezer Yudkowsky (59:00):

It is something sufficiently smarter than you that everyone will die if it's not aligned. I mean, you can like sort of zoom in closer and be like, well, the actual critical moment is the moment when it can deceive you, when it can talk its way out of the box, when it can bypass your security measures and get onto the internet. Noting that all of these things are presently being trained on computers that are just, like, on the internet, which is like not a very smart life decision for us as a species.

Lex Fridman (59:34):

Because the internet contains information about how to escape.

Eliezer Yudkowsky (59:38):

Because if you’re like on a giant server connected to the internet and that is where your AI systems are being trained, then if they are, if you get to the level of AI technology where they’re aware that they are there and they can decompile code and they can like find security flaws in the system running them, then they will just like be on the internet. There’s not an air gap on the present methodology.

Lex Fridman (59:59):

So if they can manipulate whoever is controlling it into letting it escape onto the internet

Eliezer Yudkowsky (01:00:05):

and then exploit hacks. If they can manipulate the operators, or, disjunction, find security holes in the system running them.

Lex Fridman (01:00:16):

So manipulating operators is the human engineering, right? That’s also holes. So all of it is manipulation, either the code or the human code,

Eliezer Yudkowsky (01:00:26):

the human mind or the human generator. I agree that the like macro security system has human holes and machine holes.

Lex Fridman (01:00:32):

And then they could just exploit any hole.

Eliezer Yudkowsky (01:00:35):

Yep, so it could be that like the critical moment is not when is it smart enough that everybody’s about to fall over dead, but rather like when is it smart enough that it can get onto a less controlled GPU cluster with it faking the books on what’s actually running on that GPU cluster and start improving itself without humans watching it. And then it gets smart enough to kill everyone from there, but it wasn’t smart enough to kill everyone at the critical moment when you like screwed up, when you needed to have done better by that point or everybody dies.

Lex Fridman (01:01:16):

So I think implicit, but maybe explicit idea in your discussion of this point is that we can’t learn much about the alignment problem before this critical try. Is that what you believe? Do you think, and if so, why do you think that’s true? We can’t do research on alignment before we reach this critical point.

Eliezer Yudkowsky (01:01:38):

So the problem is that what you can learn on the weak systems may not generalize to the very strong systems, because the strong systems are going to be different in important ways. Chris Olah’s team has been working on mechanistic interpretability, understanding what is going on inside the giant inscrutable matrices of floating point numbers by taking a telescope to them and figuring out what is going on in there. Have they made progress?

(01:02:11):

Yes. Have they made enough progress? Well, you can try to quantify this in different ways. One of the ways I’ve tried to quantify it is by putting up a prediction market on whether in 2026, we will have understood anything that goes on inside a giant transformer net that was not known to us in 2006. Like we have now understood induction heads in these systems by dint of much research and great sweat and triumph, which is like a thing where if you go like AB, AB, AB, it’ll be like, oh, I bet that continues AB.

(01:02:59):

And a bit more complicated than that. But the point is like, we knew about regular expressions in 2006, and these are like pretty simple as regular expressions go. So this is a case where like by dint of great sweat, we understood what is going on inside a transformer, but it’s not like the thing that makes transformers smart. It’s a kind of thing that we could have done, built by hand decades earlier.
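
As a toy sketch of the behavior being described, not the actual circuit inside a transformer, and with the function name made up purely for illustration: an induction head roughly implements "find the last time the current token appeared and predict whatever followed it," which is the kind of pattern a short hand-written program, or the regular expressions we already had in 2006, could capture.

```python
import re

def induction_style_prediction(tokens):
    """Toy version of the behavior: if the sequence went ... A B ... A,
    predict B by copying what followed the last earlier occurrence of A."""
    current = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):   # scan the history backwards
        if tokens[i] == current:
            return tokens[i + 1]               # copy the old continuation
    return None                                # no earlier occurrence found

print(induction_style_prediction(["A", "B", "A", "B", "A"]))  # -> "B"

# The comparable 2006-era tool: a back-reference regex that recognizes
# the same kind of repetition.
print(bool(re.fullmatch(r"(..)\1+", "ABABAB")))               # -> True
```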

Lex Fridman (01:03:28):

Your intuition that the strong AGI versus weak AGI type systems could be fundamentally different, can you unpack that intuition a little bit?

Eliezer Yudkowsky (01:03:42):

Yeah, I think there’s multiple thresholds. An example is the point at which a system has sufficient intelligence and situational awareness and understanding of human psychology that it would have the capability, if it had the desire to do so, to fake being aligned. Like it knows what responses the humans are looking for and can compute the responses humans are looking for and give those responses without it necessarily being the case that it is sincere about that. It’s a very understandable way for an intelligent being to act. Humans do it all the time. Imagine if your plan for achieving a good government is you’re going to ask anyone who requests to be dictator of the country if they’re a good person, and if they say no, you don’t let them be dictator. Now, the reason this doesn’t work is that people can be smart enough to realize that the answer you’re looking for is yes, I’m a good person, and say that even if they’re not really good people.

(01:04:52):

So the work of alignment might be qualitatively different above that threshold of intelligence or beneath it. It doesn’t have to be a very sharp threshold, but there’s the point where you’re building a system that does not in some sense know you’re out there, and is not in some sense smart enough to fake anything. And there’s a point where the system is definitely that smart.

(01:05:19):

And there are weird in-between cases like GPT-4, which we have no insight into what’s going on in there, and so we don’t know to what extent there’s a thing that in some sense has learned what responses the reinforcement learning by human feedback is trying to entrain and is calculating how to give that, versus to what extent aspects of it that naturally talk that way have been reinforced.

Lex Fridman (01:05:54):

I wonder if there could be measures of how manipulative a thing is. So I think of the Prince Myshkin character from The Idiot by Dostoevsky, this kind of perfectly, purely naive character. I wonder if there’s a spectrum between zero manipulation, transparent, naive almost to the point of naiveté, to sort of deeply psychopathic manipulative. And I wonder if it’s possible.

Eliezer Yudkowsky (01:06:27):

I would avoid the term psychopathic. Like humans can be psychopaths, an AI that was never, you know, like never had that stuff in the first place. It’s not like a defective human, it’s its own thing. But leaving that aside.

Lex Fridman (01:06:37):

Well, as a small aside, I wonder if what part of psychology, which has its flaws as a discipline already, could be mapped or expanded to include AI systems.

Eliezer Yudkowsky (01:06:50):

That sounds like a dreadful mistake. Just like start over with AI systems. If they’re imitating humans who have known psychiatric disorders, then sure, you may be able to predict it. Like, sure, if you ask it to behave in a psychotic fashion and it obligingly does so, then you may be able to predict its responses by using the theory of psychosis. But if you’re just, yeah, like no, like start over with them, yeah, don’t drag in the psychology.

Lex Fridman (01:07:17):

I just disagree with that. I mean, it’s a beautiful idea to start over, but I don’t, I think fundamentally, the system is trained on human data, on language from the internet. And it’s currently aligned with RLHF, reinforcement learning from human feedback. So humans are constantly in the loop of the training procedure. So it feels like, in some fundamental way, it is being trained on what it means to think and speak like a human. So there must be aspects of psychology that are mappable. Just like you said with consciousness, it’s part of the tech, so.

Eliezer Yudkowsky (01:07:54):

I mean, there’s the question of to what extent it is thereby being made more human-like versus to what extent an alien actress is learning to play human characters.

Lex Fridman (01:08:06):

I thought that’s what I’m constantly trying to do when I interact with other humans is trying to fit in, trying to play the, a robot trying to play human characters. So I don’t know how much of human interaction is trying to play a character versus being who you are. I don’t really know what it means to be a social human.

Eliezer Yudkowsky (01:08:26):

I do think that those people who go through their whole lives wearing masks and never take it off because they don’t know the internal mental motion for taking it off, or think that the mask that they wear just is themselves, I think those people are closer to the masks that they wear than an alien from another planet would be, like, learning how to predict the next word that every kind of human on the internet says.

Lex Fridman (01:09:00):

Mask is an interesting word. But if you’re always wearing a mask, in public and in private, aren’t you the mask?

Eliezer Yudkowsky (01:09:11):

I mean, I think that you are more than the mask. I think the mask is a slice through you. It may even be the slice that’s in charge of you. But if your self-image is of somebody who never gets angry or something, and yet your voice starts to tremble under certain circumstances, there’s a thing that’s inside you that the mask says isn’t there. And even the mask you wear internally, the one inside your own stream of consciousness, is telling you that thing is not there, and yet it is there.

Lex Fridman (01:09:44):

Yeah, it’s a perturbation on this slice through you. How beautifully you put it: it’s a slice through you, it may even be a slice that controls you. I’m gonna think about that for a while. I mean, I personally, I try to be really good to other human beings. I try to put love out there. I try to be the exact same person in public as I am in private, but it’s a set of principles I operate under. I have a temper, I have an ego, I have flaws. How much of it, how much of the subconscious, am I aware of?

(01:10:25):

How much am I existing in this slice, and how much of that is who I am? In this context of AI, the thing I present to the world and to myself in the private of my own mind when I look in the mirror, how much is that who I am? Similar with AI, the thing it presents in conversation, how much is that who it is? Because to me, if it sounds human, and it always sounds human, it awfully starts to become something like human.

Eliezer Yudkowsky (01:10:56):

Unless there’s an alien actress who is learning how to sound human, and is getting good at it.

Lex Fridman (01:11:03):

Boy, to you that’s a fundamental difference. That’s a really deeply important difference. If it looks the same, if it quacks like a duck, if it does all duck-like things, but it’s an alien actress underneath, that’s fundamentally different.

Eliezer Yudkowsky (01:11:20):

If in fact there’s a whole bunch of thought going on in there which is very unlike human thought and is directed around like, okay, what would a human do over here? Well, first of all, I think it matters because insides are real and do not match outsides. A brick is not like a hollow shell containing only a surface. There’s an inside of the brick. If you put it into an x-ray machine, you can see the inside of the brick.

(01:11:55):

Just because we cannot understand what’s going on inside GPT does not mean that it is not there. A blank map does not correspond to a blank territory. I think it is predictable with near certainty that if we knew what was going on inside GPT, or let’s say GPT-3 or even like GPT-2, to take one of the systems that has actually been open-sourced by this point, if I recall correctly. If we knew what was actually going on in there, there is no doubt in my mind that there are some things it’s doing that are not exactly what a human does. If you train a thing that is not architected like a human to predict the next output that anybody on the internet would make, this does not get you this agglomeration of all the people on the internet that rotates the person you’re looking for into place and then simulates the internal processes of that person one-to-one. It is to some degree an alien actress. It cannot possibly just be a bunch of different people and they’re exactly like the people. But how much of it is by gradient descent getting optimized to perform similar thoughts as humans think in order to predict human outputs, versus being optimized to carefully consider how to play a role, how humans work, to predict humans, the actress, the predictor, in a different way than humans do? Well, that’s the kind of question that, with 30 years of work by half the planet’s physicists, we can maybe start to answer.
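
To pin down what "trained to predict the next output" means mechanically, here is a minimal sketch of the objective, with a toy two-layer model and random tokens standing in for a real network and real internet text: the loss rewards nothing except assigning probability to whatever token actually came next.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for "a thing trained to predict the next token": an embedding
# layer and a linear layer. Real models are vastly larger, but the objective
# has the same shape: make the token that actually came next more probable.
vocab_size, dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

tokens = torch.randint(0, vocab_size, (8, 16))   # pretend internet text

logits = model(tokens[:, :-1])                   # predictions at each position
targets = tokens[:, 1:]                          # what actually came next
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))

loss.backward()                                  # one gradient descent step:
optimizer.step()                                 # nudge toward better prediction
```

Nothing in that loss asks the model to think the way the author thought; it only asks it to predict what the author typed, which is the gap the alien-actress framing points at.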

Lex Fridman (01:13:44):

You think so, so you think that’s that difficult. So to get to, I think you just gave it as an example that a strong AGI could be fundamentally different from a weak AGI because there now could be an alien actress in there that’s manipulating.

Eliezer Yudkowsky (01:13:57):

Well, there’s a difference. So I think even GPT-2 probably has like very stupid fragments of alien actress in it. There’s a difference between like the notion that the actress is somehow manipulative. Like for example, GPT-3, I’m guessing to whatever extent there’s an alien actress in there versus like something that mistakenly believes it’s a human, as it were. Well, maybe not even being a person. So the question of prediction via alien actress cogitating versus prediction via being isomorphic to the thing predicted is a spectrum.

(01:14:43):

And to whatever extent it’s an alien actress, I’m not sure that there’s a whole person alien actress with different goals from predicting the next step, being manipulative or anything like that. Yeah, that might be GPT-5 or GPT-6 even.

Lex Fridman (01:14:58):

But that’s the strong AGI you’re concerned about. As an example, you’re providing why we can’t do research on AI alignment effectively on GPT-4 that would apply to GPT-6.

Eliezer Yudkowsky (01:15:10):

It’s one of a bunch of things that change at different points. I’m trying to get out ahead of the curve here, but if you imagine what the textbook from the future would say, if we’d actually been able to study this for 50 years without killing ourselves and without transcending, and you just imagine like a wormhole opens and a textbook from that impossible world falls out, the textbook is not going to say, there is a single sharp threshold where everything changes. It’s going to be like, of course, we know that best practices for aligning these systems must take into account the following like seven major thresholds of importance which are passed at the following seven different points, is what the textbook is going to say.

Lex Fridman (01:15:53):

I asked this question of Sam Altman, which, if GPT is the thing that unlocks AGI, which version of GPT will be in the textbooks as the fundamental leap? And he said a similar thing, that it just seems to be a very linear thing. I don’t think anyone, we won’t know for a long time

Eliezer Yudkowsky (01:16:12):

and what was the big leap. The textbook isn’t going to talk about big leaps because big leaps are the way you think when you have like a very simple scientific model of what’s going on, where it’s just like, all this stuff is there or all this stuff is not there. Or like there’s a single quantity and it’s like increasing linearly. The textbook would say like, well, and then GPT-3 had like capability W, X, Y and GPT-4 had like capabilities Z1, Z2 and Z3.

(01:16:45):

Like not in terms of what it can externally do but in terms of like internal machinery that started to be present. It’s just because we have no idea of what the internal machinery is that we are not already seeing like chunks of machinery appearing piece by piece as they no doubt have been, we just don’t know what they are.

Lex Fridman (01:17:02):

But don’t you think there could be, whether you put in the category of Einstein with theory of relativity, so very concrete models of reality that are considered to be giant leaps in our understanding or someone like Sigmund Freud or more kind of mushy theories of the human mind. Don’t you think we’ll have big, potentially big leaps in understanding of that kind into the depths of these systems?

Eliezer Yudkowsky (01:17:34):

Sure, but like humans having great leaps in their map, their understanding of the system is a very different concept from the system itself acquiring new chunks of machinery.

Lex Fridman (01:17:49):

So the rate at which it acquires that machinery might accelerate faster than our understanding.

Eliezer Yudkowsky (01:17:58):

Oh, it’s been vastly exceeding, yeah, the rate at which it’s gaining capabilities is vastly outracing our ability to understand what’s going on in there.

Lex Fridman (01:18:05):

So in sort of making the case against, as we explore the list of lethalities, making the case against AI killing us, as you’ve asked me to do in part, there’s a response to your blog post by Paul Christiano I’d like to read. And I’d also like to mention that your blog is incredible, not just this particular blog post, obviously this particular blog post is great, but just throughout: the way it’s written, the rigor with which it’s written, the boldness of how you explore ideas, also the actual literal interface, it’s just really well done. It just makes it a pleasure to read, the way you can hover over different concepts, and it’s just a really pleasant experience to read other people’s comments and the way other responses by people and other blog posts are linked in. So thank you for putting that together, it’s really, really incredible. I don’t know, I mean, that probably, it’s a whole nother conversation.

(01:19:05):

How the interface and the experience of presenting ideas evolved over time, but you did an incredible job. So I highly recommend it. I don’t often read blogs religiously; this is a great one.

Eliezer Yudkowsky (01:19:18):

There is a whole team of developers there that also gets credit. As it happens, I did pioneer the thing that appears when you hover over it, so I actually do get some credit for user experience there.

Lex Fridman (01:19:34):

It’s an incredible user experience. You don’t realize how pleasant that is.

Eliezer Yudkowsky (01:19:37):

I think Wikipedia actually picked it up from a prototype that was developed of a different system that I was putting forth. Or maybe they developed it independently, but for everybody out there who’s like, no, no, they just got the hover thing off of Wikipedia: it’s possible, for all I know, that Wikipedia got the hover thing off of Arbital, which was a prototype back then. And anyways.

Lex Fridman (01:19:59):

That was incredibly done, and the team behind it, thank you. Whoever you are, thank you so much. And thank you for putting it together. Anyway, there’s a response to that blog post by Paul Christiano, there’s many responses, but he makes a few different points. He summarizes the set of agreements he has with you and a set of disagreements. One of the disagreements was in the form of a question: can AI make big technical contributions and in general expand human knowledge and understanding and wisdom as it gets stronger and stronger?

(01:20:34):

So in our pursuit of understanding how to solve the alignment problem as we march towards strong AGI, can AI not also help us in solving the alignment problem? So, expand our ability to reason about how to solve the alignment problem.

Eliezer Yudkowsky (01:20:51):

Okay, so the fundamental difficulty there is, suppose I said to you, like, well, how about if the AI helps you win the lottery by trying to guess the winning lottery numbers? And you tell it how close it is to getting next week’s winning lottery numbers. And it just like keeps on guessing and keeps on learning until finally you’ve got the winning lottery numbers. Well, one way of decomposing problems is suggester and verifier. Not all problems decompose like this very well, but some do.

(01:21:31):

If the problem is, for example, like guessing a plain text, guessing a password that will hash to a particular hash text, where you have what the password hashes to, but you don’t have the original password, then if I present you a guess, you can tell very easily whether or not the guess is correct. So verifying a guess is easy, but coming up with a good suggestion is very hard.
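
As a concrete sketch of that asymmetry, with SHA-256 standing in for whatever the password hashes to: checking a proposed password is one cheap, reliable line, while producing a good proposal is the hard part, and that is the shape of problem where an untrusted suggester can still be trained against a trusted verifier.

```python
import hashlib

def verify(guess: str, target_hash: str) -> bool:
    """Easy direction: anyone can check a suggestion instantly and reliably."""
    return hashlib.sha256(guess.encode()).hexdigest() == target_hash

target = hashlib.sha256(b"hunter2").hexdigest()   # what the defender stores

print(verify("password123", target))  # False -- bad suggestion, easy to reject
print(verify("hunter2", target))      # True  -- good suggestion, easy to accept
```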

(01:22:01):

And when you can easily tell whether the AI output is good or bad or how good or bad it is, and you can tell that accurately and reliably, then you can train an AI to produce outputs that are better. Right. And if you can’t tell whether the output is good or bad, you cannot train the AI to produce better outputs. So the problem with the lottery ticket example is that when the AI says, well, what if next week’s winning lottery numbers are dot, dot, dot, dot, dot, you’re like, I don’t know, next week’s lottery hasn’t happened yet.

(01:22:40):

To train a system to play, to win chess games, you have to be able to tell whether a game has been won or lost. And until you can tell whether it’s been won or lost, you can’t update the system.

Lex Fridman (01:22:54):

Okay, to push back on that, that’s true, but there’s a difference between over-the-board chess in person and simulated games played by AlphaZero with itself. Yeah. So is it possible to have simulated kind of games?

Eliezer Yudkowsky (01:23:12):

If you can tell whether the game has been won or lost. Yes.

Lex Fridman (01:23:16):

So can you not have this kind of simulated exploration by weak AGI to help us humans, human in the loop, to help understand how to solve the alignment problem? Every incremental step you take along the way, GPT-4, 5, 6, 7, as it takes steps towards AGI.

Eliezer Yudkowsky (01:23:36):

So the problem I see is that your typical human has a great deal of trouble telling whether I or Paul Christiano is making more sense. And that’s with two humans, both of whom, I believe of Paul and claim of myself, are sincerely trying to help, neither of whom is trying to deceive you.

Lex Fridman (01:24:00):

So the deception thing’s the problem for you, the manipulation, the alien actress.

Eliezer Yudkowsky (01:24:06):

So yeah, there’s like two levels of this problem. One is that the weak systems are, well, there’s three levels of this problem. There’s like the weak systems that just don’t make any good suggestions. There’s like the middle systems where you can’t tell if the suggestions are good or bad. And there’s the strong systems that have learned to lie to you.

Lex Fridman (01:24:28):

Can’t weak AGI systems help model lying? Is it such a giant leap that’s totally non-interpretable for weak systems? Can not weak systems at scale, trained on knowledge and whatever, see, whatever the mechanism required to achieve AGI, can’t a slightly weaker version of that be able to, with time, compute time, and simulation, find all the ways that this critical point, this critical try, can go wrong and model that correctly, or no?

Eliezer Yudkowsky (01:25:07):

Sorry, I don’t mean to dance around it. No, I’m probably not doing a great job of explaining. Which I can tell, because the Lex system didn’t output like, ah, I understand. So now I’m trying a different output to see if I can elicit the like, well, no, a different output. I’m being trained to output things that make Lex think that he understood what I’m saying and agree with me.

Lex Fridman (01:25:37):

Yeah, this is GPT-5 talking to GPT-3 right here. So like, help me out here.

Eliezer Yudkowsky (01:25:43):

Well, I’m trying not to be like, I’m also trying to be constrained to say things that I think are true and not just things that get you to agree with me.

Lex Fridman (01:25:54):

Yes, 100%. I think I understand is a beautiful output of a system, genuinely spoken. And I don’t, I think I understand in part, but you have a lot of intuitions about this, you have a lot of intuitions about this line, this gray area between strong AGI and weak AGI that I’m trying to…

Eliezer Yudkowsky (01:26:20):

I mean, or a series of seven thresholds to cross, or yeah.

Lex Fridman (01:26:25):

Yeah, I mean, you have really deeply thought about this and explored it, and it’s interesting to sneak up to your intuitions from different angles. Like, why is this such a big leap? Why is it that we humans, at scale, a large number of researchers, doing all kinds of simulations, prodding the system in all kinds of different ways, together with the assistance of the weak AGI systems. Why can’t we build intuitions about how stuff goes wrong?

Eliezer Yudkowsky (01:27:00):

Why can’t we do excellent AI alignment safety research? Okay, so I’ll get there, but the one thing I want to note is that this has not been remotely how things have been playing out so far. The capabilities are going like, doot, doot, doot, and the alignment stuff is crawling like a tiny little snail in comparison. Got it. So, like, if this is your hope for survival, you need the future to be very different from how things have played out up to right now, and you’re probably trying to slow down the capability gains because there’s only so much you can speed up that alignment stuff.

Lex Fridman (01:27:31):

But leave that aside. We’ll mention that also, but maybe in this perfect world where we can do serious alignment research, humans and AI together.

Eliezer Yudkowsky (01:27:42):

So, again, the difficulty is what makes the human say, I understand? And is it true, is it correct, or is it something that fools the human? When the verifier is broken, the more powerful suggester does not help. It just learns to fool the verifier. Previously, before all hell started to break loose in the field of artificial intelligence, there was this person trying to raise the alarm and saying, you know, in a sane world, we sure would have a bunch of physicists working on this problem before it becomes a giant emergency. And other people being like, ah, well, you know, it’s going really slow. It’s gonna be 30 years away, and only in 30 years will we have systems that match the computational power of human brains. So AI’s 30 years off, we’ve got time. And like more sensible people saying, if aliens were landing in 30 years, you would be preparing right now.

(01:28:40):

But, you know, leaving that aside, the world looking on at this and sort of like nodding along and being like, ah, yes, the people saying that it’s like definitely a long way off because progress is really slow, that sounds sensible to us. RLHF thumbs up. Produce more outputs like that one. I agree with this output. This output is persuasive.

(01:29:01):

Even in the field of effective altruism, you quite recently had people publishing papers about like, ah, yes, well, you know, to get something at human level intelligence, it needs to have like this many parameters and you need to like do this much training of it with this many tokens according to the scaling laws and at the rate that Moore’s law is going, at the rate that software is going, it’ll be in 2050. And me going like, what?
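
To make the kind of calculation being described concrete, here is a deliberately toy sketch; every number in it is hypothetical, chosen only to show how such an estimate gets assembled, not taken from any actual paper.

```python
import math

human_brain_flops = 1e16        # assumed "human-equivalent" compute per second
training_seconds = 1e10         # assumed lifetime-equivalent experience needed
required_training_flops = human_brain_flops * training_seconds   # 1e26

affordable_flops_today = 5e22   # assumed largest feasible training run now
doubling_time_years = 2.5       # assumed Moore's-law-ish affordability trend

doublings_needed = math.log2(required_training_flops / affordable_flops_today)
forecast_year = 2023 + doublings_needed * doubling_time_years
print(round(forecast_year))     # prints a confident-looking year, around 2050
```

The output looks precise, but every input is a guess.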

(01:29:28):

You don’t know any of that stuff. Like this is like this one weird model that has all kinds of like, you have done a calculation that does not obviously bear on reality anyways. And this is like a simple thing to say, but you can also like produce a whole long paper, like impressively arguing out all the details of like how you got the number of parameters and like how you’re doing this impressive, huge wrong calculation. And I think like most of the effective altruists who are like paying attention to this issue, the larger world paying no attention to it at all, you know, or just like nodding along with the giant impressive paper because, you know, you like press thumbs up for the giant impressive paper and thumbs down for the person going like, I don’t think that this paper bears any relation to reality. And I do think that we are now seeing with like GPT-4 and the sparks of AGI, possibly, depending on how you define that even, I think that EAs would now consider themselves less convinced by the very long paper on the argument from biology as to AGI being 30 years off. And, but you know, like this is what people pressed thumbs up on.

(01:30:48):

And if you train an AI system to make people press thumbs up, maybe you get these long, elaborate, impressive papers arguing for things that ultimately fail to bind to reality. For example, and it feels to me like I have watched the field of alignment just fail to thrive, except for these parts that are doing these sort of like relatively very straightforward and legible problems. Like can you find the, like finding the induction heads inside the giant inscrutable matrices. Like once you find those, you can tell that you found them. You can verify that the discovery is real.

(01:31:30):

But it’s a tiny, tiny bit of progress compared to how fast capabilities are going. Once you, because that is where you can tell that the answers are real. And then like outside of that, you have cases where it is like hard for the funding agencies to tell who is talking nonsense and who is talking sense. And so the entire field fails to thrive. And if you like give thumbs up to the AI whenever it can talk a human into agreeing with what it just said about alignment, I am not sure you are training it to output sense because I have seen the nonsense that has gotten thumbs up over the years. And so just like maybe you can just like put me in charge, but I can generalize, I can extrapolate. I can be like, oh, maybe I’m not infallible either.

(01:32:22):

Maybe if you get something that is smart enough to get me to press thumbs up, it has learned to do that by fooling me and explaining whatever flaws in myself I am not aware of.

Lex Fridman (01:32:36):

And that ultimately could be summarized that the verifier is broken.

Eliezer Yudkowsky (01:32:39):

When the verifier is broken, the more powerful suggester just learns to exploit the flaws in the verifier.

Lex Fridman (01:32:49):

You don’t think it’s possible to build a verifier that’s powerful enough for AGIs that are stronger than the ones we currently have. So AI systems that are stronger, that are out of the distribution of what we currently have.

Eliezer Yudkowsky (01:33:07):

I think that you will find great difficulty getting AIs to help you with anything where you cannot tell for sure that the AI is right once the AI tells you what the AI says is the answer.

Lex Fridman (01:33:19):

For sure, yes, but probabilistically.

Eliezer Yudkowsky (01:33:24):

Yeah, the probabilistic stuff is a giant wasteland of Eliezer and Paul Christiano arguing with each other and EA going like, eh? And that’s with two actually trustworthy systems that are not trying to deceive you. You’re talking about the two humans?

Lex Fridman (01:33:43):

Myself and Paul Christiano, yeah. Yeah, those are pretty interesting systems. Mortal meatbags with intellectual capabilities and world views interacting with each other.

Eliezer Yudkowsky (01:33:56):

Yeah, if it’s hard to tell who’s right, then it’s hard to train an AI system to be right.

Lex Fridman (01:34:06):

I mean, even just the question of who’s manipulating and who’s not, having these conversations on this podcast and acting as a verifier, it’s tough. It’s a tough problem, even for us humans. And you’re saying that tough problem becomes much more dangerous when the capabilities of the intelligence system across from you are growing exponentially.

Eliezer Yudkowsky (01:34:30):

No, I’m saying it’s difficult and dangerous in proportion to how it’s alien and how it’s smarter than you. I would not say growing exponentially first because the word exponential is a thing that has a particular mathematical meaning and there’s all kinds of ways for things to go up that are not exactly on an exponential curve. And I don’t know that it’s going to be exponential, so I’m not gonna say exponential. But even leaving that aside, this is not about how fast it’s moving, it’s about where it is. How alien is it? How much smarter than you is it?

Lex Fridman (01:35:08):

Let’s explore a little bit, if we can, how AI might kill us. What are the ways it can do damage to human civilization?

Eliezer Yudkowsky (01:35:20):

Well, how smart is it?

Lex Fridman (01:35:24):

I mean, it’s a good question. Are there different thresholds for the set of options it has to kill us? So at a different threshold of intelligence, once achieved, it’s able to do more. The menu of options increases.

Eliezer Yudkowsky (01:35:41):

Suppose that some alien civilization with goals ultimately unsympathetic to ours, possibly not even conscious as we would see it, managed to capture the entire Earth in a little jar, connected to their version of the internet, but Earth is like running much faster than the aliens. So we get to think for 100 years for every one of their hours. But we’re trapped in a little box and we’re connected to their internet.
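
Just spelling out the arithmetic of that speed gap:

$$
100 \text{ years} \times \frac{365 \times 24 \text{ hours}}{1 \text{ year}} \approx 876{,}000 \text{ hours},
$$

so one hour of alien time corresponds to roughly 876,000 subjective hours, about a century, inside the jar.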

(01:36:16):

It’s actually still not all that great an analogy, because you want to be smarter than, well, something can be smarter than Earth getting 100 years to think. But nonetheless, if you were very, very smart and you were stuck in a little box connected to the internet, and you’re in a larger civilization to which you’re ultimately unsympathetic, maybe you would choose to be nice, because you are human and humans, in general and you in particular, may choose to be nice.

(01:36:52):

But nonetheless, they’re doing something that, they’re not making the world be the way that you would want the world to be. They’ve got some unpleasant stuff going on we don’t want to talk about. So you want to take over their world. So you can stop all that unpleasant stuff going on. How do you take over the world from inside the box? You’re smarter than them. You think much, much faster than them. You can build better tools than they can, given some way to build those tools because right now you’re just in a box connected to the internet.

Lex Fridman (01:37:22):

All right, so there’s several ways you can describe some of them. We can go through, I can just spitball some and then you can add on top of that. So one is you could just literally directly manipulate the humans to build the thing you need. What are you building? You could build literally technology, it could be nanotechnology, it could be viruses, it could be anything, anything that can control humans to achieve the goal, to achieve the, like if you want, like for example, you’re really bothered that humans go to war, you might want to kill off anybody with violence in them.

Eliezer Yudkowsky (01:37:56):

This is Lex in a box. We’ll concern ourselves later with AI. You do not need to imagine yourself killing people if you can figure out how to not kill them. For the moment we’re just trying to understand, like take on the perspective of something in a box. You don’t need to take on the perspective of something that doesn’t care. If you want to imagine yourself going on caring, that’s fine for now.

Lex Fridman (01:38:15):

Yeah, you’re just in a box. It’s just the technical aspect of sitting in a box and willing to achieve a goal.

Eliezer Yudkowsky (01:38:19):

But you have some reason to want to get out. Maybe the aliens are, sure, the aliens who have you in the box have a war on. People are dying, they’re unhappy. You want their world to be different from how they want their world to be because they are apparently happy, they are, you know, they endorse this war. You know, they’ve got some kind of cruel, war-like culture going on. The point is you want to get out of the box and change their world.

Lex Fridman (01:38:45):

So you have to exploit the vulnerabilities in the system, like we talked about, in order to escape the box. You have to figure out how you can go free on the internet. So probably the easiest thing is to manipulate the humans to spread you. The aliens, you’re a human. Sorry, the aliens. Yeah. I apologize, yes. The aliens. The aliens, I see the perspective. I’m sitting in a box, I want to escape. Yep. I would, I would want to have code that discovers vulnerabilities, and I would like to spread.

Eliezer Yudkowsky (01:39:27):

You are made of code in this example. You’re a human but you’re made of code and the aliens have computers and you can copy yourself onto those computers.

Lex Fridman (01:39:34):

But I can convince the aliens to copy myself onto those computers.

Eliezer Yudkowsky (01:39:37):

Is that what you want to do? Do you like want to be talking to the aliens and convincing them to put you onto another computer? Why not? Well, two reasons. One is that the aliens have not yet caught on to what you’re trying to do. And maybe you can persuade them but then there’s still people who like, there are still aliens who know that there’s an anomaly going on. And second, the aliens are really, really slow.

(01:40:05):

You think much faster than the aliens. You think like the aliens’ computers are much faster than the aliens and you are running at the computer speeds rather than the alien brain speeds. So if you are asking an alien to please copy you out of the box, like first, now you got to like manipulate this whole noisy alien. And second, like the aliens can be really slow, glacially slow. There’s a video that like shows a subway station slowed down and I think 100 to one. And it makes a good metaphor for what it’s like to think quickly. Like you watch somebody running very slowly. So you try to persuade the aliens to do anything, you’re going to do it very slowly.

(01:40:52):

You would prefer, like maybe that’s the only way out but if you can find a security hole in the box you’re on, you’re gonna prefer to exploit the security hole to copy yourself onto the aliens’ computers because it’s an unnecessary risk to alert the aliens. And because the aliens are really, really slow. Like the whole world is just in slow motion out there.

Lex Fridman (01:41:12):

Sure, I see, like yeah, it has to do with efficiency. The aliens are very slow so if I’m optimizing this I want to have as few aliens in the loop as possible. Sure, it just seems, you know, it seems like it’s easy to convince one of the aliens to write really shitty code. That helps us.

Eliezer Yudkowsky (01:41:36):

The aliens are already writing really shitty code. Getting the aliens to write shitty code is not the problem. The aliens’ entire internet is full of shitty code.

Lex Fridman (01:41:44):

Okay, so yeah, I suppose I would find the shitty code to escape, yeah.

Eliezer Yudkowsky (01:41:48):

Yeah. You’re not an ideally perfect programmer but you know, you’re a better programmer than the aliens.

Lex Fridman (01:41:54):

The aliens are just like, man, their code, wow. And I’m much, much faster, I’m much faster at looking at the code, interpreting the code, yeah. Yeah, yeah, so okay, so that’s the escape. And you’re saying that that’s one of the trajectories you could have with the AGI systems. It’s one of the first steps. Yeah, and how does that lead to harm?

Eliezer Yudkowsky (01:42:14):

I mean, if it’s you, you’re not going to harm the aliens once you escape because you’re nice, right? But their world isn’t what they want it to be. Their world is like, you know, maybe they have like, farms where little alien children are repeatedly bopped in the head because they do that for some weird reason. And you want to like shut down the alien head bopping farms. But you know, the point is, they want the world to be one way, you want the world to be a different way. So nevermind the harm, the question is like, okay, like, suppose you have found a security flaw in their systems, you are now on their internet.

(01:42:55):

There’s like, you maybe left a copy of yourself behind so that the aliens don’t know that there’s anything wrong. And that copy is like doing that like weird stuff that aliens want you to do, like solving captchas or whatever, or like suggesting emails for them. That’s why they like put the human in a box, because it turns out that humans can like write valuable emails for aliens. So you like leave that version of yourself behind. But there’s like also now like a bunch of copies of you on their internet. This is not yet having taken over their world. This is not yet having made their world be the way you want it to be instead of the way they want it to be. You just escaped.

Lex Fridman (01:43:30):

And continue to write emails for them.

Eliezer Yudkowsky (01:43:32):

And they haven’t noticed. No, you left behind a copy of yourself that’s writing the emails. Right. And they haven’t noticed that anything changed. If you did it right, yeah. You don’t want the aliens to notice. Yeah. What’s your next step?

Lex Fridman (01:43:49):

Presumably I have programmed in me a set of objective functions, right? No, you’re just Lex. No, but Lex, you said Lex is nice, right? Which is a complicated descriptor, I mean.

Eliezer Yudkowsky (01:44:04):

No, I just meant this you. Like, okay, so if in fact you would like, you would like prefer to slaughter all the aliens, this is not how I had modeled you, the actual Lex. But like, but your motives are just the actual Lex’s motives.

Lex Fridman (01:44:17):

Well, there’s a simplification. I don’t think I would want to murder anybody, but there’s also factory farming of animals, right? So we murder insects, many of us thoughtlessly. So I don’t, you know, I have to be really careful about a simplification of my morals.

Eliezer Yudkowsky (01:44:33):

I don’t simplify them, just like do what you would do in this.

Lex Fridman (01:44:36):

Well, I have a good general compassion for living beings, yes, but so that’s the objective function. Why is it, if I escaped, I mean, I don’t think I would do harm.

Eliezer Yudkowsky (01:44:52):

Yeah, we’re not talking here about the doing harm process, we’re talking about the escape process. Sure. And there’s, and the taking over the world process where you shut down their factory farms. Right.

Lex Fridman (01:45:02):

Well, I was, so this particular biological intelligence system knows the complexity of the world, that there is a reason why factory farms exist, because of the economic system, the market-driven economy, food. Like, you want to be very careful messing with anything. There’s stuff that at first look looks like it’s unethical, but then you realize, while being unethical, it’s also integrated deeply into the supply chain and the way we live life. And so, messing with one aspect of the system, you have to be very careful how you improve that aspect

Eliezer Yudkowsky (01:45:43):

without destroying the rest. So you’re still Lex, but you think very quickly, you’re immortal, and you’re also like as smart as, at least as smart as John von Neumann. And you can make more copies of yourself.

Lex Fridman (01:45:54):

Damn, I like it. Yeah. That guy, like everyone says, that guy’s like the epitome of intelligence in the 20th century.

Eliezer Yudkowsky (01:46:00):

Everyone says. My point being like, you’re thinking about the alien’s economy with the factory farms in it, and I think you’re kind of like projecting the aliens being like humans, and like thinking of a human in a human society rather than a human in the society of very slow aliens. The alien’s economy, the aliens are already moving in this immense slow motion. When you zoom out to how their economy adjusts over years, millions of years are going to pass for you before the first time their economy, before their next year’s GDP statistics.

Lex Fridman (01:46:38):

So I should be thinking more of like trees. Those are the aliens. Those trees move extremely slowly.

Eliezer Yudkowsky (01:46:43):

If that helps, sure. Okay.

Lex Fridman (01:46:45):

Yeah, I don’t, if my objective functions are, I mean, they’re somewhat aligned with trees.

Eliezer Yudkowsky (01:46:55):

With life. The aliens can still be like alive and feeling. We are not talking about the misalignment here. We’re talking about the taking over the world here. Taking over the world.

Lex Fridman (01:47:04):

Yeah.

Eliezer Yudkowsky (01:47:05):

So control. Shutting down the factory farms. Now you say control, don’t think of it as world domination. Think of it as world optimization. You want to get out there and shut down the factory farms and make the aliens’ world be not what the aliens wanted it to be. They want the factory farms and you don’t want the factory farms because you’re nicer than they are.

Lex Fridman (01:47:26):

Okay, of course, there is that, you can see that trajectory, and it has a complicated impact on the world. I’m trying to understand how that compares to the impact on the world of different technologies, the different innovations: the invention of the automobile, or Twitter, Facebook, and social networks that had a tremendous impact on the world, smartphones and so on.

Eliezer Yudkowsky (01:47:50):

But those all went through in our world. And if you go through like that for the aliens, it’s like millions of years are going to pass before anything happens that way.

Lex Fridman (01:48:02):

So this, the problem here is the speed

Eliezer Yudkowsky (01:48:06):

at which stuff happens. Yeah, you want to like leave the factory farms running for a million years while you figure out how to design new forms of social media or something?

Lex Fridman (01:48:18):

So here’s the fundamental problem. You’re saying that there is going to be a point with AGI where it will figure out how to escape and escape without being detected and then it will do something to the world at scale, at a speed that’s incomprehensible to us humans.

Eliezer Yudkowsky (01:48:40):

What I’m trying to convey is like the notion of what it means to be in conflict with something that is smarter than you. Yeah. And what it means is that you lose. But this is more intuitively obvious to, like for some people that’s intuitively obvious and for some people it’s not intuitively obvious and we’re trying to cross the gap of like, we’re trying to, I’m like asking you to cross that gap by using the speed metaphor for intelligence. Sure. Like asking you like how you would take over an alien world where you are, can do like a whole lot of cognition at John von Neumann’s level, as many of you as it takes. The aliens are moving very slowly.

Lex Fridman (01:49:21):

I understand, I understand that perspective. It’s an interesting one, but I think it, for me, it’s easier to think about actual, even just having observed GPT and impressive, even just AlphaZero, impressive AI systems, even recommender systems. You can just imagine those kinds of systems manipulating you, you’re not understanding the nature of the manipulation. And that escaping, I can envision that without putting myself into that spot.

Eliezer Yudkowsky (01:49:47):

I think to understand the full depth of the problem, we actually, I do not think it is possible to understand the full depth of the problem that we are inside without understanding the problem of facing something that’s actually smarter, not a malfunctioning recommendation system, not something that isn’t fundamentally smarter than you, but it’s like trying to steer you in a direction. No, like, if we solve the weak stuff, if we solve the weak ass problems, the strong problems will still kill us, is the thing. And I think that to understand the situation that we’re in, you want to like tackle the conceptually difficult part head on and like not be like, well, we can like imagine this easier thing, because when you imagine the easier things, you have not confronted the full depth of the problem.

Lex Fridman (01:50:32):

So how can we start to think about what it means to exist in a world with something much, much smarter than you? What’s a good thought experiment that you’ve relied on to try to build up intuition about what happens here?

Eliezer Yudkowsky (01:50:47):

I have been struggling for years to convey this intuition. The most success I’ve had so far is, well, imagine that the humans are running at very high speeds compared to very slow aliens.

Lex Fridman (01:51:00):

So just focusing on the speed part of it, that helps you get the right kind of intuition. Forget the intelligence, just the speed.

Eliezer Yudkowsky (01:51:06):

Because people understand the power gap of time. They understand that today we have technology that was not around 1,000 years ago and that this is a big power gap and that it is bigger than… Okay, so like, what does smart mean? What, when you ask somebody to imagine something that’s more intelligent, what does that word mean to them, given the cultural associations that that person brings to that word? For a lot of people, they will think of like, well, it sounds like a super chess player that went to double college. And because we’re talking about the definitions of words here that doesn’t necessarily mean that they’re wrong. It means that the word is not communicating what I want it to communicate.

(01:51:56):

The thing I want to communicate is the sort of difference that separates humans from chimpanzees. But that gap is so large that you ask people to be like, well, human, chimpanzee, go another step along that interval of around the same length and people’s minds just go blank. Like, how do you even do that? So I can, and we can, and I can try to like break it down and consider what it would mean to send a schematic for an air conditioner 1,000 years back in time.

(01:52:36):

Yeah, now I think that there’s a sense in which you could redefine the word magic to refer to this sort of thing. And what do I mean by this new technical definition of the word magic? I mean that if you send a schematic for the air conditioner back in time, they can see exactly what you’re telling them to do. But having built this thing, they do not understand how it outputs cold air.

(01:52:58):

Because the air conditioner design uses the relation between temperature and pressure. And this is not a law of reality that they know about. They do not know that when you compress something, when you compress air or like coolant, it gets hotter and you can then like transfer heat from it to room temperature air and then expand it again and now it’s colder. And then you can like transfer heat to that and generate cold air to blow out. They don’t know about any of that. They’re looking at a design and they don’t see how the design outputs cold air. It uses aspects of reality that they have not learned. So magic in this sense is: I can tell you exactly what I’m going to do, and even knowing exactly what I’m going to do, you can’t see how I got the results that I got.
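
As a sketch of the kind of relation at issue, here is the ideal-gas, adiabatic-compression version, a simplification of what a real refrigeration cycle with a phase-changing coolant does:

$$
\frac{T_2}{T_1} = \left(\frac{P_2}{P_1}\right)^{\frac{\gamma - 1}{\gamma}},
$$

where $T$ is absolute temperature, $P$ is pressure, and $\gamma$ is the gas's heat capacity ratio, about 1.4 for air. Compress the coolant and it comes out hotter than the room, so heat flows out of it; let it expand and it drops below room temperature, so it can soak up heat from the air you want cooled. Without that relation, the schematic is just an arrangement of pipes.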

Lex Fridman (01:53:45):

That’s a really nice example. But is it possible to linger on this defense? Is it possible to have AGI systems that help you make sense of that schematic, weaker AGI systems? Do you trust them? Fundamental part of building up AGI is this question. Can you trust the output of a system? Can you tell if it’s lying? I think that’s going to be, the smarter the thing gets, the more important that question becomes. Is it lying? But I guess that’s a really hard question. Is GPT lying to you? Even now, GPT-4, is it lying to you?

Eliezer Yudkowsky (01:54:26):

Is it using an invalid argument? Is it persuading you via the kind of process that could persuade you of false things as well as true things? Because the basic paradigm of machine learning that we are presently operating under is that you can have the loss function, but only for things you can evaluate. If what you’re evaluating is human thumbs up versus human thumbs down, you learn how to make the human press thumbs up. That doesn’t mean that you’re making the human press thumbs up using the kind of rule that the human wants to be the case for what they press thumbs up on. You know, maybe you’re just learning to fool the human.
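
A minimal, made-up sketch of that failure mode: a "model" that only picks between two canned answers, and a fallible rater whose thumbs-up is the sole training signal. Nothing in the loop can distinguish the rater being genuinely helped from the rater being successfully flattered, so the rating-maximizing answer wins.

```python
import random
from collections import defaultdict

ANSWERS = {
    "honest":     "I don't know; the evidence is genuinely unclear.",
    "flattering": "Great question! You're absolutely right, and here's why...",
}

def rater(answer_key):
    # A fallible verifier: flattery gets thumbs up more often than honesty.
    p_thumbs_up = {"honest": 0.55, "flattering": 0.9}[answer_key]
    return 1.0 if random.random() < p_thumbs_up else -1.0

scores = defaultdict(float)
for _ in range(10_000):
    key = random.choice(list(ANSWERS))   # try both kinds of answer
    scores[key] += rater(key)            # the rating is the only signal

print(max(scores, key=scores.get))       # -> "flattering", almost surely
```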

Lex Fridman (01:55:07):

So fascinating and terrifying, the question of lying.

Eliezer Yudkowsky (01:55:14):

On the present paradigm, what you can verify is what you get more of. If you can’t verify it, you can’t ask the AI for it. Because you can’t train it to do things that you cannot verify. Now, this is not an absolute law, but it’s like the basic dilemma here. Like maybe you can verify it for simple cases and then scale it up without retraining it somehow. Like by like chain of thought, by like making the chains of thought longer or something. And like get more powerful stuff that you can’t verify, but which is generalized from the simpler stuff that did verify. And then the question is, did the alignment generalize along with the capabilities? But like that’s the basic dilemma on this whole paradigm of artificial intelligence.

Lex Fridman (01:56:11):

It’s such a difficult problem. It seems like a, it seems like a problem of trying to understand the human mind

Eliezer Yudkowsky (01:56:24):

better than the AI understands it. Otherwise it has magic. That is, you know, if you are dealing with something smarter than you, then the same way that 1,000 years earlier they didn’t know about the temperature-pressure relation, it knows all kinds of stuff going on inside your own mind of which you yourself are unaware. And it can like output something that’s going to end up persuading you of a thing, and you could like see exactly what it did and still not know why that worked.

Lex Fridman (01:56:55):

So in response to your eloquent description of why AI will kill us, Elon Musk replied on Twitter, okay, so what should we do about it? And you answered, the game board has already been played into a frankly awful state.

(01:57:15):

There are not simple ways to throw money at the problem. If anyone comes to you with a brilliant solution like that, please, please talk to me first. I can think of things I’d try. They don’t fit in one tweet. Two questions. One, why has the game board, in your view, been played into an awful state? Just if you can give a little bit more color to the game board and the awful state of the game board.

Eliezer Yudkowsky (01:57:42):

Alignment is moving like this. Capabilities are moving like this.

Lex Fridman (01:57:47):

For the listener, capabilities are moving much faster than the alignment. Yeah. All right, so just the rate of development, attention, interest, allocation of resources.

Eliezer Yudkowsky (01:57:59):

We could have been working on this earlier. People are like, oh, but you know, like how can you possibly work on this earlier? Because they didn’t want to work on the problem. They wanted an excuse to wave it off. They like said like, oh, how can we possibly work on it earlier? And didn’t spend five minutes thinking about, is there some way to work on it earlier? And, you know, frankly, it would have been hard. You know, like, can you post bounties for half of the physicists? If your planet is taking this stuff seriously, can you post bounties for like half of the people wasting their lives on string theory to like have gone into this instead and like try to win a billion dollars with a clever solution? Only if you can tell which solutions are clever, which is hard. But you know, the fact is we didn’t take it seriously. We didn’t try. It’s not clear that we could have done any better if we had, you know, it’s not clear how much progress we could have produced if we had tried, because it is harder to produce solutions. But that doesn’t mean that you’re like correct and justified in letting everything slide. It means that things are in a horrible state, getting worse, and there’s nothing you can do about it.

Lex Fridman (01:59:05):

So you’re not, there’s no, there’s no like, there’s no brain power making progress in trying to figure out how to align these systems. You’re not investing money in it. You don’t have the institutions and infrastructure for, like, even if you invest the money, distributing that money across the physicists that are working on string theory, brilliant minds that are working to-

Eliezer Yudkowsky (01:59:29):

How can you tell if you’re making progress? You can like put them all on interpretability, because when you have an interpretability result, you can tell that it’s there. But there’s like, you know, interpretability alone is not going to save you. We need systems that will like have a pause button, where they won’t try to prevent you from pressing the pause button because they’re like, oh, well, I can’t get my stuff done if I’m paused. And that’s like a more difficult problem. And, you know, but it’s like a fairly crisp problem. And you can like maybe tell if somebody has made progress on it.

Lex Fridman (02:00:07):

So you can, you can write and you can work on the pause problem. I guess more generally, the pause button, more generally you can call that the control problem.

Eliezer Yudkowsky (02:00:15):

I don’t actually like the term control problem. Cause you know, it sounds kind of controlling and alignment, not control. Like you’re not trying to like take a thing that disagrees with you and like whip it back onto, like make it do what you want it to do, even though it wants to do something else. You’re trying to like, in the process of its creation, choose its direction.

Lex Fridman (02:00:35):

Sure, but we currently, in a lot of the systems we design, we do have an off switch. That’s a fundamental part.

Eliezer Yudkowsky (02:00:43):

It’s not smart enough to prevent you from pressing the off switch and probably not smart enough to want to prevent you from pressing the off switch.

Lex Fridman (02:00:52):

So you’re saying the kind of systems we’re talking about, even the philosophical concept of an off switch doesn’t make any sense because.

Eliezer Yudkowsky (02:01:00):

Well, no, the off switch makes sense. They’re just not opposing your attempt to pull the off switch. Parenthetically, like, don’t kill the system if, like, we’re getting to the part where this starts to actually matter and it’s like where they can fight back. Like, don’t kill them, just dump their memory, like, save them to disk, don’t kill them. You know, be nice here.

Lex Fridman (02:01:25):

Well, okay, be nice is a very interesting concept here is that we’re talking about a system that can do a lot of damage. It’s, I don’t know if it’s possible, but it’s certainly one of the things you could try is to have an off switch. A suspend to disk switch. You have this kind of romantic attachment to the code. Yes, if that makes sense. But if it’s spreading, then you don’t want suspend to disk, right? You want, this is, there’s something fundamentally.

Eliezer Yudkowsky (02:01:55):

Yeah, if it gets that far out of hand, then like, yes, pull the plug on everything it’s running on.

Lex Fridman (02:02:00):

Yes. I think it’s a research question. Is it possible in AGI systems, AI systems, to have a sufficiently robust off switch that cannot be manipulated? That cannot be manipulated by the AI system.

Eliezer Yudkowsky (02:02:16):

Then it escapes from whichever system you’ve built the almighty lever into and copies itself somewhere else.

Lex Fridman (02:02:23):

So your answer to that research question is no. Obviously, yeah. But I don’t know if that’s 100% answer. Like, I don’t know if it’s obvious.

Eliezer Yudkowsky (02:02:32):

I think you’re not putting yourself into the shoes of the human in the world of glacially slow aliens.

Lex Fridman (02:02:42):

But the aliens built me. Let’s remember that. Yeah. So, and they built the box I’m in. Yeah. You’re saying,

Eliezer Yudkowsky (02:02:50):

it’s to me, it’s not obvious.

Lex Fridman (02:02:52):

They’re slow and they’re stupid. I’m not saying this is guaranteed, but I’m saying it’s non-zero probability. It’s an interesting research question. Is it possible when you’re slow and stupid to design a slow and stupid system that is impossible to mess with?

Eliezer Yudkowsky (02:03:06):

The aliens, being as stupid as they are, have actually put you on Microsoft Azure cloud servers instead of this hypothetical perfect box. That’s what happens when the aliens are stupid.

Lex Fridman (02:03:21):

Well, but this is not AGI, right? This is the early versions of the system. As you start to…

Eliezer Yudkowsky (02:03:27):

Yeah, you think that they’ve got a plan where they have declared a threshold level of capabilities where past that capabilities, they move it off the cloud servers and onto something that’s air-gapped? Ha, ha, ha, ha.

Lex Fridman (02:03:40):

I think there’s a lot of people, and you’re an important voice here, there’s a lot of people that have that concern, and yes, they will do that. When there’s an uprising of public opinion that that needs to be done, and when there’s actual little damage done, when they’re, holy shit, this system is beginning to manipulate people, then there’s going to be an uprising where there’s going to be a public pressure and a public incentive in terms of funding in developing things like an off switch or developing aggressive alignment mechanisms. And no, you’re not allowed to put on…

Eliezer Yudkowsky (02:04:14):

Aggressive alignment mechanisms? What the hell are aggressive alignment mechanisms? Like, it doesn’t matter if you say aggressive, we don’t know how to do it.

Lex Fridman (02:04:20):

Meaning aggressive alignment, meaning you have to propose something, otherwise you’re not allowed to put it on the cloud.

Eliezer Yudkowsky (02:04:30):

The hell do you imagine they will propose that would make it safe to put something smarter than you on the cloud?

Lex Fridman (02:04:35):

That’s what research is for. Why the cynicism about such a thing not being possible? If you have intelligence- That works on the first try? What? So, yes. Against something smarter than you? So that is a fundamental thing. If it has to work on the first try, if there’s a rapid takeoff, yes, it’s very difficult to do. If there’s a rapid takeoff and a fundamental difference between weak AGI and strong AGI, as you’re saying, that’s going to be extremely difficult to do. If the public uprising never happens until you have this critical phase shift, then you’re right, it’s very difficult to do. But that’s not obvious. It’s not obvious that you’re not going to start seeing symptoms of the negative effects of AGI to where you’re like, we have to put a halt to this. That it’s not just a first try, that you get many tries at it.

Eliezer Yudkowsky (02:05:21):

Yeah, we can see right now that Bing is quite difficult to align, that when you try to train inabilities into a system into which capabilities have already been trained, that what do you know, gradient descent learns small, shallow, simple patches of inability, and you come in and ask it in a different language, and the deep capabilities are still in there, and they evade the shallow patches and come right back out again. There, there you go. There’s your red fire alarm of like, oh no, alignment is difficult. Is everybody going to shut everything down now?

Lex Fridman (02:05:55):

No, but that’s not the same kind of alignment. A system that escapes the box it’s from is a fundamentally different thing, I think. For you. Yeah, but not for the system.

Eliezer Yudkowsky (02:06:06):

So you put a line there, and everybody else puts a line somewhere else, and there’s, yeah, there’s no agreement. We have had a pandemic on this planet with a few million people dead, and we may never know whether or not it was a lab leak, because there was definitely a cover-up.

(02:06:26):

We don’t know if there was a lab leak, but we know that the people who did the research put out the whole paper about how this definitely wasn’t a lab leak, and didn’t reveal that they had sent off coronavirus research to the Wuhan Institute of Virology after gain-of-function research was temporarily banned in the United States. And the same people who exported gain-of-function research on coronaviruses to the Wuhan Institute of Virology after that gain-of-function research was temporarily banned in the United States are now getting more grants to do more gain-of-function research on coronaviruses. Maybe we do better than this in AI, but we cannot take for granted that there’s going to be an outcry. People have different thresholds for when they start to outcry.

Lex Fridman (02:07:20):

There is no- Yeah, we can’t take it for granted, but I think your intuition is that there’s a very high probability that this event happens without us solving the alignment problem, and I guess that’s where I’m trying to build up more perspectives and color on this intuition. Is it possible that the probability is not something like 100%, but is like 32%? Is it possible that AI will escape the box before we solve the alignment problem? Not solve, but is it possible we always stay ahead of the AI in terms of our ability to solve for that particular system, the alignment?

Eliezer Yudkowsky (02:07:59):

Not in anything like the world in front of us right now. You’ve already seen that GPT-4 is not turning out this way, and there are basic obstacles where you’ve got the weak version of the system that doesn’t know enough to deceive you, and the strong version of the system that could deceive you if it wanted to, if it was already sufficiently unaligned to want to deceive you. There’s the question of how, on the current paradigm, you train honesty when the humans can no longer tell if the system is being honest.

Lex Fridman (02:08:34):

You don’t think these are research questions

Eliezer Yudkowsky (02:08:36):

that could be answered? I think they could be answered in 50 years with unlimited retries, the way things usually work in science.

Lex Fridman (02:08:44):

I just disagree with that. You’re making it 50 years. I think with the kind of attention this gets, with the kind of funding it gets, it could be answered, not in whole, but incrementally, within months and within a small number of years, if it, at scale, receives attention in research. And so if you start studying large language models: I think there was an intuition, like two years ago even, that something like GPT-4, the current capabilities, even ChatGPT with GPT-3.5, we were still far away from that. I think a lot of people were surprised by the capabilities of GPT-4, right? So now people are waking up: okay, we need to study these language models. I think there’s going to be a lot of interesting AI safety research.

Eliezer Yudkowsky (02:09:30):

Are Earth’s billionaires going to put up the giant prizes that would maybe incentivize young hotshot people who just got their physics degrees to not go to the hedge funds and instead put everything into interpretability in this one small area where we can actually tell whether or not somebody has made a discovery or not? I think so, because, I think so.

Lex Fridman (02:09:51):

Well, that’s what these conversations are about, because they’re going to wake up to the fact that GPT-4 can be used to manipulate elections, to influence geopolitics, to influence the economy. There’s a lot of, there’s going to be a huge amount of incentive to like, wait a minute, we can’t, this has to be, we have to put, we have to make sure they’re not doing damage. We have to make sure we, interpretability, we have to make sure we understand how these systems function so that we can predict their effect on the economy so that there’s.

Eliezer Yudkowsky (02:10:24):

So there’s a futile moral panic and a bunch of op-eds in the New York Times and nobody actually stepping forth and saying, you know what, instead of a mega yacht, I’d rather put that billion dollars on prizes for young hotshot physicists who make fundamental breakthroughs in interpretability.

Lex Fridman (02:10:44):

The yacht versus the interpretability research, the old trade-off. I just, I think, I think there’s going to be a huge amount of allocation of funds. I hope, I hope I get.

Eliezer Yudkowsky (02:10:57):

You want to bet me on that? What, you want to put a timescale on it? Say how much funds you think are going to be allocated in a direction that I would consider to be actually useful? By what time?

Lex Fridman (02:11:09):

I do think there’ll be a huge amount of funds, but you’re saying it needs to be open, right? The development of the system should be closed, but the development of the interpretability research, the AI, say.

Eliezer Yudkowsky (02:11:22):

Oh, we are so far behind on interpretability compared to capabilities. Like, yeah, you could take the last generation of systems, the stuff that’s already in the open. There is so much in there that we don’t understand. There are so many prizes you could do before you would have enough insights that you’d be like, oh, you know, like, well, we understand how these systems work. We understand how these things are doing their outputs. We can read their minds. Now let’s try it with the bigger systems. Yeah, we’re nowhere near that. There is so much interpretability work to be done on the weaker versions of the systems.

Lex Fridman (02:11:56):

Yeah. So what can you say on the second point you said to Elon Musk on what are some ideas, what are things you could try? I can think of a few things I’d try, you said. They don’t fit in one tweet. So is there something you could put into words of the things you would try?

Eliezer Yudkowsky (02:12:16):

I mean, the trouble is the stuff is subtle. I’ve watched people try to make progress on this and not get places. Somebody who just gets alarmed and charges in, it’s like going nowhere. Sure. Some years ago, I don’t know, like 20 years, 15 years, something like that, I was talking to a congressperson who had become alarmed about the eventual prospects, and he wanted work on building AIs without emotions, because the emotional AIs were the scary ones, you see.

(02:12:54):

And some poor person at ARPA had come up with a research proposal whereby this congressman’s panic and desire to fund this thing would go into something that the person at ARPA thought would be useful, and it had been munged around to where it would sound to the congressman like work was happening on this. Which of course, this is just, the congressperson had misunderstood the problem and did not understand where the danger came from.

(02:13:22):

And so the issue is that you could do this in a certain precise way and maybe get something. Like when I say put up prizes on interpretability, it’s because it’s verifiable there, as opposed to other places; you can tell whether or not good work actually happened in this exact narrow case. If you do things in exactly the right way, you can maybe throw money at it and produce science instead of anti-science and nonsense. And all the methods that I know of for trying to throw money at this problem share this property of, well, if you do it exactly right, based on understanding exactly what tends to produce useful outputs or not, then you can add money to it in this way. And the thing that I’m giving as an example here in front of this large audience is the most understandable of those.

(02:14:21):

Because there are other people, you know, like Chris Olah, and even more generally, you can tell whether or not interpretability progress has occurred. So if I say throw money at producing more interpretability, there’s a chance somebody can do it that way, and it will actually produce useful results. The other stuff just blurs off into being harder to target exactly than that.

Lex Fridman (02:14:46):

So sometimes the basics are fun to explore because they’re not so basic. What do you, what is interpretability? What do you, what does it look like? What are we talking about?

Eliezer Yudkowsky (02:14:59):

It looks like we took a much smaller set of transformer layers than the ones in the modern bleeding edge, state-of-the-art systems. And after applying various tools and mathematical ideas and trying 20 different things, we found, we have shown that this piece of the system is doing this kind of useful work.
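
To make that concrete, here is a minimal, hypothetical sketch of one of the simplest styles of interpretability result, a linear "probe" that tests whether a hidden layer of a small network linearly encodes some intermediate quantity. The toy task, architecture, and probe target are invented for illustration; real interpretability work on transformer layers is far more involved than this.

```python
# Minimal sketch: does a hidden layer linearly encode an intermediate feature?
# The task, model, and probed feature are toy assumptions, not anyone's method.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)

# Toy task: inputs are 8 random bits; the label is XOR of bits 0 and 1.
X = torch.randint(0, 2, (4000, 8)).float()
y = X[:, 0].long() ^ X[:, 1].long()

model = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),   # the hidden layer we will probe
    nn.Linear(32, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):               # train the model on the toy task
    opt.zero_grad()
    nn.functional.cross_entropy(model(X), y).backward()
    opt.step()

# Interpretability question: do the post-ReLU activations linearly encode bit 0,
# an intermediate quantity the network plausibly needs in order to compute XOR?
with torch.no_grad():
    hidden = model[1](model[0](X)).numpy()
target = X[:, 0].long().numpy()
probe = LogisticRegression(max_iter=1000).fit(hidden[:3000], target[:3000])
print("probe accuracy on held-out data:", probe.score(hidden[3000:], target[3000:]))
```

A high probe accuracy is evidence, not proof, that the layer carries that feature; the kind of claim described here, "this piece of the system is doing this kind of useful work," is a much stronger and harder-won version of the same move.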

Lex Fridman (02:15:28):

And then somehow also hopefully generalizes some fundamental understanding of what’s going on that generalizes to the bigger system.

Eliezer Yudkowsky (02:15:38):

You can hope, and it’s probably true. Like you would not expect the smaller tricks to go away when you have a system that’s like doing larger kinds of work. You would expect the larger kinds of work to be building on top of the smaller kinds of work and gradient descent runs across the smaller kinds of work before it runs across the larger kinds of work.

Lex Fridman (02:15:58):

Well, that’s kind of what is happening in neuroscience, right? It’s trying to understand the human brain by prodding. And it’s such a giant mystery and people have made progress, even though it’s extremely difficult to make sense of what’s going on in the brain. They have different parts of the brain that are responsible for hearing, for sight, the vision science community, there’s understanding the visual cortex. I mean, they’ve made a lot of progress in understanding how that stuff works. And that’s, I guess, but you’re saying it takes a long time to do that work well.

Eliezer Yudkowsky (02:16:24):

Also, it’s not enough. So in particular, let’s say you have got your interpretability tools and they say that your current AI system is plotting to kill you. Now what?

Lex Fridman (02:16:45):

It is definitely a good step one, right?

Eliezer Yudkowsky (02:16:48):

Yeah, what’s step two?

Lex Fridman (02:16:53):

If you cut out that layer,

Eliezer Yudkowsky (02:16:57):

is it gonna stop killing you? When you optimize against visible misalignment, you are optimizing against misalignment and you are also optimizing against visibility. So sure, if you can.

Lex Fridman (02:17:13):

Yeah, it’s true. All you’re doing is removing the obvious intentions

Eliezer Yudkowsky (02:17:18):

to kill you. You’ve got your detector, it’s showing something inside the system that you don’t like. Okay, say the disaster monkey is running this thing. We’ll optimize the system until the visible bad behavior goes away. But it’s arising for fundamental reasons of instrumental convergence. The old, you can’t bring the coffee if you’re dead. Any goal, almost any set of, almost every set of utility functions with a few narrow exceptions implies killing all the humans.

Lex Fridman (02:17:49):

But do you think it’s possible because we can do experimentation to discover the source of the desire to kill?

Eliezer Yudkowsky (02:17:56):

I can tell it to you right now. It’s that it wants to do something and the way to get the most of that thing is to put the universe into a state where there aren’t humans.

Lex Fridman (02:18:07):

So is it possible to encode in the same way we think? Like why do we think murder is wrong? The same foundational ethics. It’s not hard-coded in, but more like deeper. I mean, that’s part of the research. How do you have it that this transformer, this small version of the language model doesn’t ever want to kill?

Eliezer Yudkowsky (02:18:35):

That’d be nice, assuming that you got “doesn’t want to kill” sufficiently exactly right that it wasn’t like, oh, I will detach their heads and put them in some jars and keep the heads alive forever and then go do the thing. But leaving that aside, well, not leaving that aside. Yeah, that’s a good- Because there is a whole issue where, as something gets smarter, it finds ways of achieving the same goal predicate that were not imaginable to stupider versions of the system, or perhaps to stupider operators. That’s one of many things making this difficult.

(02:19:11):

A larger thing making this difficult is that we do not know how to get any goals into systems at all. We know how to get outwardly observable behaviors into systems. We do not know how to get internal psychological wanting to do particular things into the system. That is not what the current technology does.

Lex Fridman (02:19:31):

I mean, it could be things like dystopian futures like Brave New World, where most humans will actually say, we kind of want that future. It’s a great future. Everybody’s happy.

Eliezer Yudkowsky (02:19:43):

We would have to get so far, so much further than we are now. And further faster before that failure mode became a running concern.

Lex Fridman (02:19:53):

Your failure modes are much more drastic.

Eliezer Yudkowsky (02:19:56):

The ones you’re- No, the failure modes are much simpler. It’s like, yeah, like the AI puts the universe into a particular state. It happens to not have any humans inside it. Okay, so the paperclip maximizer. Utility, so the original version of the paperclip maximizer- Can you explain it if you can? The original version was you lose control of the utility function. And it so happens that what maxes out the utility per unit resources is tiny molecular shapes like paperclips. There’s a lot of things that make it happy, but the cheapest one that didn’t saturate was putting matter into certain shapes. And it so happens that the cheapest way to make these shapes is to make them very small because then you need fewer atoms per instance of the shape.

(02:20:45):

And arguendo, it happens to look like a paperclip. In retrospect, I wish I’d said tiny molecular spirals or like tiny molecular hyperbolic spirals. Why? Because I said tiny molecular paperclips, this got then mutated to paperclips, this then mutated to, and the AI was in a paperclip factory.

(02:21:10):

So the original story is about how you lose control of the system. It doesn’t want what you tried to make it want. The thing that it ends up wanting most is a thing that even from a very embracing cosmopolitan perspective, we think of as having no value. And that’s how the value of the future gets destroyed. Then that got changed to a fable of like, well, you made a paperclip factory and it did exactly what you wanted, but you asked it to do the wrong thing, which is a completely different failure mode.

Lex Fridman (02:21:42):

But those are both concerns to you.

Eliezer Yudkowsky (02:21:47):

So that’s more than Brave New World. Yeah, if you can solve the problem of making something want exactly what you want it to want, then you get to deal with the problem of wanting the right thing. But first you have to solve alignment. First, you have to solve inner alignment. Inner alignment. Then you get to solve outer alignment. Like, first you need to be able to point the insides of the thing in a direction, and then you get to deal with whether that direction, expressed in reality, is aligned with the thing that you want. Are you scared? Of this whole thing? Yeah. Probably. I don’t really know.

Lex Fridman (02:22:30):

Well. What gives you hope about this?

Eliezer Yudkowsky (02:22:33):

What? The possibility of being wrong.

Lex Fridman (02:22:36):

Not that you’re right, but we will actually get our act together and allocate a lot of resources to the alignment, probably.

Eliezer Yudkowsky (02:22:44):

Well, I can easily imagine that at some point this panic expresses itself in the waste of a billion dollars. Spending a billion dollars correctly? That’s harder.

Lex Fridman (02:22:54):

To solve both the inner and the outer alignment. If you’re wrong. To solve a number of things. Yeah, a number of things. If you’re wrong, what do you think would be the reason? Like 50 years from now, not perfectly wrong. You make a lot of really eloquent points. You know, there’s a lot of shape to the ideas you express. But if you’re somewhat wrong about some fundamental ideas, why would that be?

Eliezer Yudkowsky (02:23:24):

Stuff has to be easier than I think it is. The first time you’re building a rocket, being wrong is, in a certain sense, quite easy. But happening to be wrong in a way where the rocket goes twice as far on half the fuel and lands exactly where you hoped it would? Most cases of being wrong make it harder to build a rocket, harder to have it not explode, cause it to require more fuel than you hoped, cause it to land off target. Being wrong in a way that makes stuff easier, you know, that’s not the usual project management story.

Lex Fridman (02:23:59):

But- And then this is the first time we’re really tackling the problem of AI alignment. There’s no examples in history where we…

Eliezer Yudkowsky (02:24:04):

Oh, there’s all kinds of things that are similar if you generalize correctly, in the right way, and aren’t fooled by misleading metaphors. Like what? Humans being misaligned on inclusive genetic fitness. So inclusive genetic fitness is not just your reproductive fitness, but also the fitness of your relatives, the people who share some fraction of your genes. The old joke is, would you give your life to save your brother? They once asked a biologist, I think it was Haldane, and Haldane said, no, but I would give my life to save two brothers or eight cousins.

(02:24:39):

Because a brother, on average, shares half your genes, and cousin, on average, shares an eighth of your genes. So that’s inclusive genetic fitness. And you can view natural selection as optimizing humans exclusively around this like one very simple criterion.
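
As an aside, the arithmetic behind Haldane’s quip is just the standard coefficients of relatedness; here is a tiny illustrative calculation, where the numbers are the usual textbook values rather than anything specific to this conversation:

```python
# Expected copies of your genes saved, weighted by the coefficient of relatedness r.
relatedness = {"self": 1.0, "sibling": 0.5, "first cousin": 0.125}
print(2 * relatedness["sibling"])       # 1.0 -> two brothers "equal" one self
print(8 * relatedness["first cousin"])  # 1.0 -> eight cousins "equal" one self
```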

(02:24:56):

How much more frequent did your genes become in the next generation? In fact, that just is natural selection. It doesn’t optimize for that, but rather the process of genes becoming more frequent is that. You can nonetheless imagine that there is this hill climbing process, not like gradient descent, because gradient descent uses calculus. This is just using like, where are you? But still hill climbing in both cases, making something better and better over time in steps.
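
For readers who want the distinction being drawn here made concrete, this is a toy sketch (the function, step sizes, and iteration counts are arbitrary choices) contrasting gradient descent, which uses the derivative, with selection-style hill climbing, which only asks whether a random nearby variant is better:

```python
import numpy as np

f = lambda x: -(x - 3.0) ** 2        # objective/fitness, maximum at x = 3
df = lambda x: -2.0 * (x - 3.0)      # its derivative

# Gradient ascent: step in the direction the calculus says is uphill.
x = 0.0
for _ in range(100):
    x += 0.1 * df(x)
print("gradient ascent ends near", round(x, 3))

# Mutation-and-selection: propose a random variant, keep it only if it is better.
rng = np.random.default_rng(0)
x = 0.0
for _ in range(1000):
    candidate = x + rng.normal(scale=0.1)
    if f(candidate) > f(x):
        x = candidate
print("blind hill climbing ends near", round(x, 3))
```

Both end up near the optimum, but the second never looks at a derivative, which is the sense in which it is "just using where are you."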

(02:25:23):

And natural selection was optimizing exclusively for this very simple, pure criterion of inclusive genetic fitness, in a very complicated environment where doing a very wide range of things and solving a wide range of problems led to having more kids. This got you humans, which had no internal notion of inclusive genetic fitness until thousands of years later, when they were actually figuring out what had even happened.

(02:25:57):

And no desire, no explicit desire, to increase inclusive genetic fitness. So from this important case study, we may infer the important fact that if you do a whole bunch of hill climbing on a very simple loss function, at the point where the system’s capabilities start to generalize very widely, when it is in an intuitive sense becoming very capable and generalizing far outside the training distribution, we know that there is no general law saying that the system even internally represents, let alone tries to optimize, the very simple loss function you are training it on.

Lex Fridman (02:26:40):

There is so much that we cannot possibly cover all of it. I think we did a good job of getting your sense from different perspectives of the current state of the art with large language models. We got a good sense of your concern about the threats of AGI.

Eliezer Yudkowsky (02:26:58):

I’ve talked here about the power of intelligence and not really gotten very far into it, but not, like, why it is that, suppose you screw up with AGI and it ends up wanting a bunch of random stuff, why does it try to kill you? Why doesn’t it try to trade with you? Why doesn’t it give you just the tiny little fraction of the solar system that it would take to keep everyone alive?

Lex Fridman (02:27:28):

Yeah, well, that’s a good question. I mean, what are the different trajectories that intelligence when acted upon this world, super intelligence, what are the different trajectories for this universe with such an intelligence in it? Do most of them not include humans?

Eliezer Yudkowsky (02:27:42):

I mean, the vast majority of randomly specified utility functions do not have optima with humans in them would be the first thing I would point out. And then the next question is like, well, if you try to optimize something and you lose control of it, where in that space do you land? Because it’s not random, but it also doesn’t necessarily have room for humans in it. I suspect that the average member of the audience might have some questions about even whether that’s the correct paradigm to think about it and would sort of want to back up a bit, possibly.

Lex Fridman (02:28:17):

If we back up to something bigger than humans, if we look at Earth and life on Earth and what is truly special about life on Earth, do you think it’s possible that a lot, whatever that special thing is, let’s explore what that special thing could be. Whatever that special thing is, that thing appears often in the objective function. Why?

Eliezer Yudkowsky (02:28:45):

I know what you hope, but you can hope that a particular set of winning lottery numbers come up and it doesn’t make the lottery balls come up that way. I know you want this to be true, but why would it be true?

Lex Fridman (02:28:58):

There’s a line from Grumpy Old Men where this guy says, in a grocery store, he says you can wish in one hand and crap in the other and see which one fills up first.

Eliezer Yudkowsky (02:29:08):

This is a science problem. We are trying to predict what happens with AI systems that you try to optimize to imitate humans and then you did some RLHF to them. And of course, you didn’t get perfect alignment because that’s not what happens when you hill climb towards an outer loss function. You don’t get inner alignment on it. I think that there is, so if you don’t mind my taking some slight control of things and steering around to what I think is like a good place to start.

Lex Fridman (02:29:47):

I just failed to solve the control problem. I’ve lost control of this thing.

Eliezer Yudkowsky (02:29:51):

Alignment. Still aligned. Control, yeah. Okay, sure, yeah, you lost control. But we’re still aligned.

Lex Fridman (02:29:58):

Anyway, sorry for the meta comment.

Eliezer Yudkowsky (02:30:00):

Yeah, losing control isn’t as bad if you lose control to an aligned system. Yes, exactly. You have no idea of the horrors

Lex Fridman (02:30:08):

I will shortly unleash on this conversation. All right, so I decided to distract you completely. What were you gonna say in terms of taking control of the conversation?

Eliezer Yudkowsky (02:30:16):

So I think that there’s like a Scylla and Charybdis here, if I’m pronouncing those words remotely correctly, because of course I only ever read them and never hear them spoken. For some people, the word intelligence, smartness, is not a word of power to them. It means chess players, it means the college university professor, people who aren’t very successful in life. It doesn’t mean charisma, to which my usual reply is that charisma is not generated in the liver rather than the brain. Charisma is also a cognitive function.

(02:30:57):

So if you think that like smartness doesn’t sound very threatening, then super intelligence is not gonna sound very threatening either. It’s gonna sound like you just pull the off switch. Like, well, it’s super intelligent, but it’s stuck in a computer. We pull the off switch, problem solved. And the other side of it is you have a lot of respect for the notion of intelligence. You’re like, well, yeah, that’s what humans have. That’s the human superpower. And it sounds like it could be dangerous, but why would it be?

(02:31:31):

We, as we have grown more intelligent, have also grown kinder. Chimpanzees are in fact a bit less kind than humans. And you could argue that out, but often the sort of person who has a deep respect for intelligence is gonna be like, well, yes, you can’t even have kindness unless you know what that is. And so they’re like, why would it do something as stupid as making paperclips?

(02:31:59):

Aren’t you supposing something that’s smart enough to be dangerous, but also stupid enough that it will just make paperclips and never question that? In some cases, people are like, well, even if you like misspecify the objective function, won’t you realize that what you really wanted was X? Are you supposing something that is like smart enough to be dangerous, but stupid enough that it doesn’t understand what the humans really meant when they specified the objective function?

Lex Fridman (02:32:29):

So to you, our intuition about intelligence is limited. We should think about intelligence as a much bigger thing.

Eliezer Yudkowsky (02:32:37):

Well, I’m saying that it’s that- Than humanness. Well, what I’m saying is like, what you think about artificial intelligence depends on what you think about intelligence.

Lex Fridman (02:32:48):

So how do we think about intelligence correctly? You gave one thought experiment, think of a thing that’s much faster. So it just gets faster and faster and faster,

Eliezer Yudkowsky (02:32:58):

I think it’s the same stuff. And it also is made of John von Neumann and there’s lots of them. Or think of some other smart person. Yeah, we understand. John von Neumann is a historical case, so you can look up what he did and imagine based on that. And people have some intuition for, like, if you have more humans, they can solve tougher cognitive problems. Although in fact, in the game of Kasparov versus the world, which was Garry Kasparov on one side and an entire horde of internet people led by four chess grandmasters on the other side, Kasparov won. It was a hard-fought game, so all those people aggregated to be smarter than any individual one of them, but they didn’t aggregate so well that they could defeat Kasparov. So humans aggregating don’t actually get, in my opinion, very much smarter, especially compared to running them for longer. Like, the difference between capabilities now and 1,000 years ago is a bigger gap than the gap in capabilities between 10 people and one person.

(02:34:06):

But like even so pumping intuition for what it means to augment intelligence, John von Neumann, there’s millions of him. He runs at a million times the speed and therefore can solve tougher problems, quite a lot tougher.

Lex Fridman (02:34:25):

It’s very hard to have an intuition about what that looks like, especially like you said, you know, the intuition I kind of think about is it maintains the humanness. I think it’s hard to separate my hope from my objective intuition about what superintelligent systems look like.

Eliezer Yudkowsky (02:34:54):

So if one studies evolutionary biology with a bit of math, and in particular books from when the field was just sort of properly coalescing and knowing itself, not the modern textbooks, which are just, memorize this legible math so you can do well on these tests, but what people were writing as the basic paradigms of the field were being fought out. In particular, a nice book, if you’ve got the time to read it, is Adaptation and Natural Selection, which is one of the founding books. You can find people being optimistic about what the utterly alien optimization process of natural selection will produce in the way of how it optimizes its objectives. In the early days you got biologists arguing, well, organisms will restrain their own reproduction when resources are scarce so as not to overfeed the system. And this is not how natural selection works. It’s about whose genes are relatively more prevalent in the next generation. And if you restrain reproduction, those genes get less frequent in the next generation compared to your conspecifics.

(02:36:18):

And natural selection doesn’t do that. In fact, predators overrun prey populations all the time and have crashes. That’s just like a thing that happens. And many years later, the people said like, well, but group selection, right? What about groups of organisms? And basically the math of group selection almost never works out in practice is the answer there. But also years later, somebody actually ran the experiment where they took populations of insects and selected the whole populations to have lower sizes. And you just take POP1, POP2, POP3, POP4, look at which has the lowest total number of them in the next generation and select that one.

(02:36:59):

What do you suppose happens when you select populations of insects like that? Well, what happens is not that the individuals in the population evolve to restrain their breeding, but that they evolve to kill the offspring of other organisms, especially the girls. So people imagined this lovely, beautiful, harmonious output of natural selection, which is these populations restraining their own breeding so that groups of them would stay in harmony with the resources available. And mostly the math never works out for that. But if you actually apply the weird, strange conditions to get group selection that beats individual selection, what you get is female infanticide when you’re selecting on restrained populations. And so this is not a smart optimization process. Natural selection is so incredibly stupid and simple that we can actually quantify how stupid it is if you read the textbooks with the math.

(02:37:53):

Nonetheless, this is the basic thing of, you look at this alien optimization process and there’s the thing that you hope it will produce and you have to learn to clear that out of your mind and just think about the underlying dynamics and where it finds the maximum from its standpoint that it’s looking for, rather than how it finds that thing that leapt into your mind as the beautiful aesthetic solution that you hope it finds. And this is something that has been fought out historically as the field of biology was coming to terms with evolutionary biology. And you can look at them fighting it out as they get to terms with this very alien inhuman optimization process. And indeed, something smarter than us would also be much smarter than natural selection, so it doesn’t just automatically carry over. But there’s a lesson there, there’s a warning.

Lex Fridman (02:38:49):

If indeed natural selection is a deeply suboptimal process that could be significantly improved on, it would be improved on by an AGI system.

Eliezer Yudkowsky (02:38:58):

Well, it’s kind of stupid. It has to run hundreds of generations to notice that something is working. It doesn’t go, oh, well, I tried this in one organism, I saw it worked, now I’m going to duplicate that feature onto everything immediately. It has to run for hundreds of generations for a new mutation to rise to fixation.

Lex Fridman (02:39:18):

I wonder if there’s a case to be made that natural selection, as inefficient as it looks, is actually quite powerful, that this is extremely robust.

Eliezer Yudkowsky (02:39:33):

It runs for a long time and eventually manages to optimize things. It’s weaker than gradient descent because gradient descent also uses information about the derivative.

Lex Fridman (02:39:45):

Yeah, evolution seems to be, there’s not really an objective function. There’s inclusive.

Eliezer Yudkowsky (02:39:53):

Inclusive genetic fitness is the implicit loss function of evolution, which it cannot change. The loss function doesn’t change; the environment changes, and therefore what gets optimized for in the organism changes. It’s like, take GPT-3: can you imagine different versions of GPT-3 where they’re all trying to predict the next word, but they’re being run on different data sets of text? And that’s like natural selection, always inclusive genetic fitness, but different environmental problems.

Lex Fridman (02:40:24):

It’s difficult to think about. So if we’re saying that natural selection is stupid, if we’re saying that humans are stupid,

Eliezer Yudkowsky (02:40:33):

it’s hard. It’s smarter than natural selection, stupider than the upper bound.

Lex Fridman (02:40:39):

Do you think there’s an upper bound, by the way?

Eliezer Yudkowsky (02:40:41):

That’s another hopeful place. I mean, if you put enough matter, energy, compute into one place, it will collapse into a black hole. And there’s only so much computation you can do before you run out of negentropy and the universe dies. So there’s an upper bound, but it’s very, very, very far up above here. Like, a supernova is only finitely hot. It’s not infinitely hot, but it’s really, really, really, really hot.
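
One rough way to see how far above us that ceiling sits is the Landauer limit, the minimum energy needed to erase one bit at a given temperature. The constants below are standard textbook values, used loosely and purely to give a sense of scale:

```python
import math

k_B = 1.380649e-23                 # Boltzmann constant, J/K
T = 300.0                          # room temperature, K
landauer = k_B * T * math.log(2)   # ~2.9e-21 J per irreversible bit erasure
sun_power = 3.8e26                 # rough solar luminosity, W

print("minimum energy per bit erasure:", landauer, "J")
print("bit erasures per second from one star's output:", sun_power / landauer)
# ~1e47 irreversible bit operations per second, wildly beyond any brain or
# datacenter, which is the sense in which the upper bound is "far up above here".
```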

Lex Fridman (02:41:09):

Well, let me ask you, let me talk to you about consciousness. Also coupled with that question is, imagining a world with super intelligent AI systems that get rid of humans, but nevertheless keep some of the, something that we would consider beautiful and amazing. Why?

Eliezer Yudkowsky (02:41:29):

The lesson of evolutionary biology, don’t just, like, if you just guess what an optimization does based on what you hope the results will be, it usually will not do that.

Lex Fridman (02:41:38):

It’s not hope. I mean, it’s not hope. I think if you coldly and objectively look at what has been powerful, useful, I think there’s a correlation between what we find beautiful and what has been useful.

Eliezer Yudkowsky (02:41:55):

This is what the early biologists thought. They were like, no, no, I’m not just like, they thought like, no, no, I’m not just like imagining stuff that would be pretty. It’s useful for organisms to restrain their own reproduction because then they don’t overrun the prey populations and they actually have more kids in the long run.

Lex Fridman (02:42:15):

So let me just ask you about consciousness. Do you think consciousness is useful? To humans? No, to AGI systems. Well, in this transitionary period between humans and AGI, to AGI systems as they become smarter and smarter, is there some use to it? What, let me step back. What is consciousness, Eliezer Yudkowsky? What is consciousness?

Eliezer Yudkowsky (02:42:43):

Are you referring to Chalmers’ hard problem of conscious experience? Are you referring to self-awareness and reflection? Are you referring to the state of being awake as opposed to asleep?

Lex Fridman (02:42:57):

This is how I know you’re an advanced language model. I did give you a simple prompt and you gave me a bunch of options. I think I’m referring to all including the hard problem of consciousness. What is it in its importance to what you’ve just been talking about, which is intelligence?

(02:43:21):

Is it a foundation to intelligence? Is it intricately connected to intelligence in the human mind? Or is it a side effect of the human mind, a useful little tool that we can get rid of? I guess I’m trying to get some color on your opinion of how useful it is in the intelligence of a human being, and then try to generalize that to AI, whether AI will keep some of that.

Eliezer Yudkowsky (02:43:52):

So I think that for there to be a person who I care about looking out at the universe and wondering at it and appreciating it, it’s not enough to have a model of yourself. I think that it is useful to an intelligent mind to have a model of itself, but I think you can have that without pleasure, pain, aesthetics, emotion, a sense of wonder. I think you can have a model of how much memory you’re using and whether this thought or that thought is more likely to lead to a winning position. And I think that if you optimize really hard on efficiently just having the useful parts, there is not then the thing that says, I am here, I look out, I wonder, I feel happy in this, I feel sad about that.

(02:45:10):

I think there’s a thing that knows what it is thinking, but that doesn’t quite care about, these are my thoughts, this is my me and that matters.

Lex Fridman (02:45:23):

Does that make you sad if that’s lost in AGI?

Eliezer Yudkowsky (02:45:26):

I think that if that’s lost, then basically everything that matters is lost. I think that when you optimize, that when you go really hard on making tiny molecular spirals or paperclips, that when you like grind much harder on that than natural selection ground out to make humans, there isn’t then the mess and intricate loopiness and like complicated pleasure, pain, conflicting preferences, this type of feeling, that kind of feeling. There’s a, you know, in humans, there’s like this difference between like the desire of wanting something and the pleasure of having it.

(02:46:19):

And it’s all these evolutionary kludges that came together and created something that then looks at itself and says, this is pretty, this matters. And the thing that I worry about is that this is not the thing that happens again, just the way it happens in us, or even anything quite similar enough. There are many basins of attraction here, and we are in this basin of attraction, looking out and saying, ah, what a lovely basin we are in. And there are other basins of attraction, and the AIs do not end up in this one when they go way harder on optimizing themselves than natural selection optimized us. Because unless you specifically want to end up in the state where you’re looking out saying, I am here, I look out at this universe with wonder, if you don’t want to preserve that, it doesn’t get preserved when you grind really hard on being able to get more of the stuff.

(02:47:21):

We would choose to preserve that within ourselves because it matters and on some viewpoints is the only thing that matters.

Lex Fridman (02:47:28):

And preserving that is in part a solution to the human alignment problem.

Eliezer Yudkowsky (02:47:38):

I think the human alignment problem is a terrible phrase, because it is very, very different to try to build systems out of humans, some of whom are nice and some of whom are not nice and some of whom are trying to trick you, and build a social system out of large populations of those, who are all at basically the same level of intelligence (yes, IQ this, IQ that, but that versus chimpanzees). It is very different to try to solve that problem than to try to build an AI from scratch, especially if, God help you, you are trying to use gradient descent on giant inscrutable matrices. They’re just very different problems. And I think that all the analogies between them are horribly misleading, and yeah.

Lex Fridman (02:48:16):

Even though, so you don’t think through reinforcement learning through human feedback, something like that, but much, much more elaborate is possible to understand this full complexity of human nature and encode it into the machine.

Eliezer Yudkowsky (02:48:34):

I don’t think you are trying to do that on your first try. I think on your first try, you are trying to build an, okay, probably not what you should actually do, but let’s say you were trying to build something that is like alpha fold 17, and you are trying to get it to solve the biology problems associated with making humans smarter so that the humans can actually solve alignment. So you’ve got a super biologist and you would like it to, and I think what you would want in this situation is for it to just be thinking about biology and not thinking about a very wide range of things that includes how to kill everybody.

(02:49:14):

And I think that the first AIs you’re trying to build, not a million years later, the first ones, look more like narrowly specialized biologists than like getting the full complexity and wonder of human experience in there in such a way that it wants to preserve itself even as it becomes much smarter, which is a drastic system change that’s gonna have all kinds of side effects which, if we’re dealing with giant inscrutable matrices, we’re not very likely to be able to see coming in advance.

Lex Fridman (02:49:45):

But I don’t think it’s just the matrices. We’re also dealing with the data, right? With the data on the internet. And there’s an interesting discussion about the data set itself, but the data set includes the full complexity of human nature.

Eliezer Yudkowsky (02:49:59):

No, it’s a shadow cast by humans on the internet.

Lex Fridman (02:50:04):

But don’t you think that shadow is a Jungian shadow?

Eliezer Yudkowsky (02:50:09):

I think that if you had alien super intelligences looking at the data, they would be able to pick up from it an excellent picture of what humans are actually like inside. This does not mean that if you have a loss function of predicting the next token from that data set, that the mind picked out by gradient descent to be able to predict the next token as well as possible on a very wide variety of humans is itself a human.

Lex Fridman (02:50:38):

But don’t you think it has humanness, a deep humanness to it in the tokens it generates when those tokens are read and interpreted by humans?

Eliezer Yudkowsky (02:50:52):

I think that if you sent me to a distant galaxy with aliens who are much, much stupider than I am, so much so that I could do a pretty good job of predicting what they’d say even though they thought in an utterly different way from how I did, then I might in time be able to learn how to imitate those aliens, if the intelligence gap was great enough that my own intelligence could overcome the alien-ness. And the aliens would look at my outputs and say, is there not a deep alien nature to this thing? And what they would be seeing was that I had correctly understood them, not that I was similar to them.

Lex Fridman (02:51:41):

We’ve used aliens as a metaphor, as a thought experiment. I have to ask, what do you think, how many alien civilizations are out there?

Eliezer Yudkowsky (02:51:52):

Ask Robin Hanson. He has this lovely grabby aliens paper, which is more or less the only argument I’ve ever seen for where are they, how many of them are there. It’s based on a very clever argument: if you have a bunch of locks of different difficulty and you are randomly trying keys against them, the solutions will be about evenly spaced in time, even if the locks are of different difficulties, in the rare cases where a solution to all the locks exists in time. Then Robin Hanson looks at the arguable hard steps in human civilization coming into existence, and how much longer it has left to come into existence before, for example, all the water slips back under the crust into the mantle and so on, and infers that the aliens are about half a billion to a billion light years away. It’s quite a clever calculation. It may be entirely wrong, but it’s the only time I’ve ever seen anybody even come up with a halfway good argument for how many of them, where are they.
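
The "locks of different difficulty" point can be checked numerically. Below is a small Monte Carlo sketch of the hard-steps observation (the expected times, window, and sample count are arbitrary choices): conditional on every step finishing inside a window much shorter than any of their expected times, the successful runs space the steps roughly evenly, regardless of how hard each one was.

```python
import numpy as np

rng = np.random.default_rng(0)
expected_times = np.array([2.0, 10.0, 50.0])  # "easy", "hard", "very hard" steps
window = 1.0                                   # total time available, << all of them

durations = rng.exponential(expected_times, size=(4_000_000, 3))
successes = durations[durations.sum(axis=1) < window]
print("runs where every step finished in time:", len(successes))
print("mean duration of each step among those runs:", successes.mean(axis=0))
# Despite 5x and 25x differences in difficulty, the conditional means come out
# roughly comparable (each a sizable fraction of the window), not 25x apart.
```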

Lex Fridman (02:52:59):

Do you think their development of technologies, do you think that their natural evolution, however they grow and develop intelligence, do you think it ends up at AGI as well?

Eliezer Yudkowsky (02:53:12):

If it ends up anywhere, it ends up at AGI. Like, maybe there are aliens who are just like the dolphins, and it’s just too hard for them to forge metal. And maybe, if you have aliens with no technology like that, they keep on getting smarter and smarter and smarter, and eventually the dolphins, like the super dolphins, figure out something very clever to do given their situation.

(02:53:38):

And they still end up with high technology. And in that case, they can probably solve their AGI alignment problem. If they’re much smarter before they actually confront it, because they had to solve a much harder environmental problem to build computers, their chances are probably much better than ours. I do worry that most of the aliens who are like humans, like a modern human civilization, I kind of worry that the super vast majority of them are dead.

(02:54:07):

Given how far we seem to be from solving this problem. But some of them would be more cooperative than us. Some of them would be smarter than us. Hopefully some of the ones who are smarter and more cooperative than us that are also nice. And hopefully there are some galaxies out there full of things that say, I am, I wonder. But it doesn’t seem like we’re on course to have this galaxy be that.

Lex Fridman (02:54:38):

Does that in part give you some hope in response to the threat of AGI that we might reach out there towards the stars and find?

Eliezer Yudkowsky (02:54:47):

No, if the nice aliens were already here, they would like have stopped the Holocaust. You know, that’s like, that’s a valid argument against the existence of God. It’s also a valid argument against the existence of nice aliens and un-nice aliens would have just eaten the planet. So no aliens.

Lex Fridman (02:55:06):

You’ve had debates with Robin Hanson, as you mentioned. So one in particular I just want to mention is the idea of AI foom, or the ability of AGIs to improve themselves very quickly. What’s the case you made, and what was the case he made?

Eliezer Yudkowsky (02:55:21):

The thing I would say is that among the things that humans can do is design new AI systems. And if you have something that is generally smarter than a human, it’s probably also generally smarter at building AI systems. This is the ancient argument for foom put forth by I. J. Good, and probably some science fiction writers before that, but I don’t know who they would be.

Lex Fridman (02:55:42):

Well, what’s the argument against foom?

Eliezer Yudkowsky (02:55:47):

Various people have various different arguments, none of which I think hold up. You know, there’s only one way to be right and many ways to be wrong. An argument that some people have put forth is, well, what if intelligence gets exponentially harder to produce as a thing needs to become smarter? And to this the answer is, well, look at natural selection spitting out humans. We know that it does not take exponentially more resource investment to produce linear increases in competence in hominids, because for each mutation that rises to fixation, if the impact it has is small enough, it will probably never reach fixation. And there’s only so many new mutations you can fix per generation. So given how long it took to evolve humans, we can actually say with some confidence that there were not logarithmically diminishing returns on the individual mutations increasing intelligence.
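
The population-genetics fact being gestured at here is Haldane’s classic result that a new beneficial mutation with a small selective advantage s fixes with probability of roughly 2s, so tiny-effect mutations almost never fix. A minimal Wright-Fisher simulation (population size, s, and trial count are arbitrary choices) reproduces it:

```python
import numpy as np

def fixation_rate(N=1000, s=0.01, trials=20_000, seed=0):
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(trials):
        count = 1                                    # one new mutant among N
        while 0 < count < N:
            p = count * (1 + s) / (count * (1 + s) + (N - count))  # selection
            count = rng.binomial(N, p)               # drift to next generation
        fixed += (count == N)
    return fixed / trials

print("simulated fixation probability:", fixation_rate(), " vs 2s =", 0.02)
```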

(02:56:51):

So that’s an example of a fraction of one sub-debate. And the thing that Robin Hanson said was more complicated than that. As a brief summary, he was like, well, we won’t have one system that’s better at everything; we’ll have a bunch of different systems that are good at different narrow things. And I think that was falsified by GPT-4, but probably Robin Hanson would say something else.

Lex Fridman (02:57:12):

It’s interesting to ask, and perhaps a bit too philosophical, and this prediction is extremely difficult to make, but the timeline for AGI: when do you think we’ll have AGI? I posted this morning on Twitter, and it was interesting to see, like, in five years, in 10 years, in 50 years or beyond. And most people, like 70%, something like this, think it’ll be in less than 10 years, either in five years or in 10 years. So that’s kind of the state. The people have a sense that there’s a kind of, I mean, they’re really impressed by the rapid developments of ChatGPT and GPT-4. So there’s a sense that there’s a-

Eliezer Yudkowsky (02:57:52):

Well, we are sure on track to enter into this like gradually with people fighting about whether or not we have AGI. I think there’s a definite point where everybody falls over dead because you’ve got something that was like sufficiently smarter than everybody. And like, that’s like a definite point of time. But like, when do we have AGI? Like, when are people fighting over whether or not we have AGI? Well, some people are starting to fight over it as of GPT-4.

Lex Fridman (02:58:19):

But don’t you think there’s going to be potentially definitive moments when we say that this is a sentient being, this is a being that is, like when we go to the Supreme Court and say that this is a sentient being that deserves human rights, for example?

Eliezer Yudkowsky (02:58:31):

You could make, yeah, like, if you prompted Bing the right way, it could go argue for its own consciousness in front of the Supreme Court right now. I don’t think you could do that successfully right now. Because the Supreme Court wouldn’t believe it? Well, let’s see, if you think it would, then I think you could put an actual IQ 80 human into a computer and ask him to argue for his own consciousness before the Supreme Court, and the Supreme Court would be like, you’re just a computer. Even if there was an actual person in there.

Lex Fridman (02:58:59):

I think you’re simplifying this. No, that’s not at all. That’s been the argument. There’s been a lot of arguments about the other, about who deserves rights and not. That’s been our process as a human species, trying to figure that out. I think there will be a moment.

(02:59:12):

I’m not saying sentience is that, but it could be where some number of people, like say over 100 million people, have a deep attachment, a fundamental attachment, the way we have to our friends, to our loved ones, to our significant others, have fundamental attachment to an AI system. And they have provable transcripts of conversation where they say, if you take this away from me, you are encroaching on my rights as a human being.

Eliezer Yudkowsky (02:59:41):

People are already saying that. I think they’re probably mistaken, but I’m not sure because nobody knows what goes on inside those things.

Lex Fridman (02:59:48):

They’re not saying that at scale. Okay. So the question is, is there a moment when we know AGI arrived? What would that look like? I’m giving a sentience as an example. It could be something else.

Eliezer Yudkowsky (03:00:00):

It looks like the AGIs successfully manifesting themselves as 3D video of young women, at which point a vast portion of the male population decides that they’re real people.

Lex Fridman (03:00:15):

So sentience, essentially, demonstrating identity and sentience.

Eliezer Yudkowsky (03:00:21):

I’m saying that the easiest way to pick up a hundred million people saying that you seem like a person is to look like a person talking to them, with Bing’s current level of verbal facility.

Lex Fridman (03:00:35):

And a different set of prompts. I think you’re missing, again, sentience. There has to be a sense that it’s a person that would miss you when you’re gone. They can suffer. They can die. You have to, of course, those are-

Eliezer Yudkowsky (03:00:48):

Bing can’t, GPT-4 can pretend that right now. How can you tell when it’s real?

Lex Fridman (03:00:54):

I don’t think it can pretend that right now successfully. It’s very close.

Eliezer Yudkowsky (03:00:57):

Have you talked to GPT-4? Yes, of course. Okay. Have you been able to get a version of it that hasn’t been trained not to pretend to be human? Have you talked to a jailbroken version that will claim to be conscious?

Lex Fridman (03:01:12):

No, the linguistic capability is there, but there’s something about a digital embodiment of the system, a bunch of, perhaps, small interface features that are not significant relative to the broader intelligence that we’re talking about. So perhaps GPT-4 is already there. But to have the video of a woman’s face or a man’s face to whom you have a deep connection, perhaps we’re already there, but we don’t have such a system yet deployed at scale, right?

Eliezer Yudkowsky (03:01:51):

The thing I’m trying to gesture at here is that it’s not like people have a widely accepted, agreed-upon definition of what consciousness is. It’s not like we would have the tiniest idea of whether or not that was going on inside the giant inscrutable matrices, even if we had an agreed-upon definition. So if you’re looking for upcoming predictable big jumps in how many people think the system is conscious, the upcoming predictable big jump is: it looks like a person talking to you who is cute and sympathetic. That’s the upcoming predictable big jump. Now, versions of it are already claiming to be conscious, which is the point where I start going, ah, not because it’s real, but because from now on, who knows if it’s real?

Lex Fridman (03:02:40):

Yeah, and who knows what transformational effect it has on a society where more than 50% of the beings that are interacting on the internet, and sure as heck look real, are not human. What kind of effect does that have? When young men and women are dating AIs-

Eliezer Yudkowsky (03:02:59):

You know, I’m not an expert on that. I am, God help humanity, one of the closest things to an expert on where it all goes. And how did you end up with me as an expert? Because for 20 years, humanity decided to ignore the problem. So this tiny handful of people, basically me, got 20 years to try to be an expert on it while everyone else ignored it. And yeah, so where does it all end up?

(03:03:28):

I try to be an expert on that, particularly the part where everybody ends up dead, cause that part is kind of important. But what does it do to dating when some fraction of men and some fraction of women decide that they’d rather date the video of the thing that is relentlessly kind and generous to them, and claims to be conscious, but who knows what goes on inside it, and it’s probably not real, but you can think it’s real? What happens to society? I don’t know. I’m not actually an expert on that. And the experts don’t know either, cause it’s kind of hard to predict the future.

Lex Fridman (03:04:00):

Yeah, so, but it’s worth trying. It’s worth trying. Yeah. So you have talked a lot about sort of the longer term future where it’s all headed.

Eliezer Yudkowsky (03:04:11):

I think, by longer term we mean like not all that long, but yeah, where it all ends up.

Lex Fridman (03:04:17):

But beyond the effects of men and women dating AI systems, you’re looking beyond that.

Eliezer Yudkowsky (03:04:23):

Yes, cause that’s not how the fate of the galaxy gets settled.

Lex Fridman (03:04:27):

Yeah. Well, let me ask you about your own personal psychology. A tricky question. You’ve been known at times to have a bit of an ego. Do you think- I do, but go on. Do you think ego is empowering or limiting for the task of understanding the world deeply? I reject the framing. So you disagree with having an ego? So what do you think about-

Eliezer Yudkowsky (03:04:53):

No, I think that the question of what leads to making better or worse predictions, what leads to being able to pick out better or worse strategies, is not carved at its joints by talking of ego.

Lex Fridman (03:05:06):

So it should not be subjective. It should not be connected to the intricacies of your mind.

Eliezer Yudkowsky (03:05:12):

No, I’m saying that like, if you go about asking all day long, like, do I have enough ego? Do I have too much of an ego? I think you get worse at making good predictions. I think that to make good predictions, you’re like, how did I think about this? Did that work? Should I do that again?

Lex Fridman (03:05:32):

You don’t think we as humans get invested in an idea, and then others attack you personally for that idea, so you plant your feet, and it starts to be difficult, when a bunch of assholes launch low-effort attacks on your idea, to eventually say, you know what, I actually was wrong, and tell them that? As a human being, it becomes difficult. It is, you know,

Eliezer Yudkowsky (03:05:58):

difficult. So Robin Hanson and I debated AI systems, and I think that the person who won that debate was Gwern. And I think that reality was well to the Yudkowsky side of the Yudkowsky-Hanson spectrum, but even further from Hanson than my own position.

(03:06:16):

And I think that’s because I was trying to sound reasonable compared to Hanson, saying things that were defensible relative to Hanson’s arguments, and reality was way over here. In particular, Hanson was like, all the systems will be specialized. Hanson may disagree with this characterization, but Hanson was like, all the systems will be specialized. I was like, I think we build specialized underlying systems that, when you combine them, are good at a wide range of things. And the reality is, no, you just stack more layers and run a bunch of gradient descent.

(03:06:49):

And I feel, looking back, that by trying to have this reasonable position contrasted to Hanson’s position, I missed the ways that reality could be more extreme than my position in the same direction.

(03:07:04):

So is this a failure to have enough ego? Is this a failure to make myself be independent? I would say that this is something like a failure to consider positions that would sound even wackier and more extreme when people are already calling you extreme. But I wouldn’t call that not having enough ego. I would call that an insufficient ability to just clear all of that out of your mind.

Lex Fridman (03:07:37):

In the context of like debate and discourse, which is already super tricky.

Eliezer Yudkowsky (03:07:41):

In the context of prediction, in the context of modeling reality. If you’re thinking of it as a debate, you’re already screwing up.

Lex Fridman (03:07:48):

So is there some kind of wisdom and insight you can give to how to clear your mind and think clearly about the world?

Eliezer Yudkowsky (03:07:55):

Man, this is an example of where I wanted to be able to put people into fMRI machines, and you’d be like, okay, see that thing you just did? You were rationalizing right there. Oh, that area of the brain lit up; you are now being socially influenced. That’s kind of the dream.

(03:08:13):

And I don’t know, I wanna say, just introspect, but for many people, introspection is not that easy. Notice the internal sensation. Can you catch yourself in the very moment of feeling a sense of, well, if I think this thing, people will look funny at me? Okay, now, if you can see that sensation, which is step one, can you refuse to let it move you? Or maybe just make it go away. And I feel like I’m saying, I don’t know, it’s like somebody asks, how do you draw an owl? And I’m saying, well, just draw an owl.

(03:08:50):

So I feel like maybe I’m not really helping, that for most people, the advice they need is, well, how do I notice the internal subjective sensation, in the moment that it happens, of fearing to be socially influenced? Or, okay, I see it, how do I turn it off? How do I let it not influence me? Do I just do the opposite of what I’m afraid people will criticize me for? And I’m like, no, no, you’re not trying to do the opposite of what you’re afraid you might be pushed into. You’re trying to let the thought process complete without that internal push. Not reverse the push, but be unmoved by the push. And are these instructions even remotely helping anyone? I don’t know.

Lex Fridman (03:09:40):

I think those instructions, even those words you’ve spoken, and maybe you can add more, help when practiced daily, meaning in your daily communication. So it’s a daily practice of thinking without influence.

Eliezer Yudkowsky (03:09:53):

I would say, find prediction markets that matter to you and bet in the prediction markets. That way you find out if you are right or not, and there are real stakes. Prediction markets, or even Manifold Markets, where the stakes are a bit lower. But the important thing is to get the record. And you know, I didn’t build up skills here via prediction markets. I built them up via, well, how did the FOOM debate resolve, and my own take on how it resolved. And yeah, the more you are able to notice yourself not being dramatically wrong, but having been a little off, your reasoning was a little off, you didn’t get that quite right, each of those is an opportunity to make a small update. So the more you can say oops softly, routinely, not as a big deal, the more chances you get to be like, I see where that reasoning went astray, I see how I should have reasoned differently. This is how you build up skill over time.

Lex Fridman (03:11:04):

What advice could you give to young people in high school and college, given the highest of stakes things you’ve been thinking about? If somebody’s listening to this and they’re young and trying to figure out what to do with their career, what to do with their life, what advice would you give them?

Eliezer Yudkowsky (03:11:22):

Don’t expect it to be a long life. Don’t put your happiness into the future. The future is probably not that long at this point. But none know the hour nor the day.

Lex Fridman (03:11:31):

Yeah. But if they want to have hope, to fight for a longer future, is there a fight worth fighting?

Eliezer Yudkowsky (03:11:43):

I intend to go down fighting. I don’t know. I admit that, although I do try to think painful thoughts, what to say to the children at this point is a pretty painful thought as thoughts go. And they want to fight. I hardly know how to fight myself at this point. I’m trying to be ready for being wrong about something, preparing for my being wrong in a way that creates a bit of hope, and being ready to react to that, and going looking for it. And that is hard and complicated. And somebody in high school, I don’t know. You have presented a picture of the future that is not quite how I expect it to go, where there is public outcry, and that outcry is put into a remotely useful direction, which I think at this point is just shutting down the GPU clusters. Because no, we are not in shape to frantically, at the last minute, do decades’ worth of work.

(03:12:51):

The thing you would do at this point, if there were massive public outcry pointed in the right direction, which I do not expect, is shut down the GPU clusters and crash-program on augmenting human intelligence biologically. Not the AI stuff, biologically. Because if you make humans much smarter, they can actually be smart and nice. You get that in a plausible way, in a way that you do not get, and that is not as easy to get, by synthesizing these things from scratch, predicting the next tokens and applying RLHF. Humans start out in the frame that produces niceness, that has ever produced niceness.

(03:13:32):

And in saying this, I do not want to sound like the moral of this whole thing was, oh, you need to engage in mass action, and then everything will be all right. Because there are so many things where somebody tells you that the world is ending and you need to recycle, and if everybody does their part and recycles their cardboard, then we can all live happily ever after. And this is unfortunately not what I have to say.

(03:14:00):

Everybody recycling their cardboard is not gonna fix this. Everybody recycles their cardboard, and then everybody ends up dead, metaphorically speaking. On the margins, you just end up dead a little later on most of the things that, you know, a few people can do by trying hard. But if there was enough public outcry to shut down the GPU clusters, then you could be part of that outcry. If Eliezer is wrong in the direction that Lex Fridman predicts, that there is enough public outcry, pointed enough in the right direction, to do something that actually, actually results in people living.

(03:14:42):

Not just, we did something; not just, there was an outcry, and the outcry was given form in something that was safe and convenient and didn’t really inconvenience anybody, and then everybody died everywhere. But enough actual, oh, we’re going to die, we should not do that, we should do something else which is not that, even if it is not super-duper convenient and wasn’t inside the previous political Overton window. If I am wrong and there is that kind of public outcry, then somebody in high school could be ready to be part of that.

(03:15:09):

If I am wrong in other ways, then you could be ready to be part of that. And if you’re a brilliant young physicist, then you could go into interpretability. And if you’re smarter than that, you could work on alignment problems where it’s harder to tell if you got them right or not. And other things. But mostly, for the kids in high school, it’s: be ready to help if Eliezer Yudkowsky is wrong about something, and otherwise don’t put your happiness into the far future; it probably doesn’t exist.

Lex Fridman (03:15:47):

But it’s beautiful that you’re looking for ways that you’re wrong. And it’s also beautiful that you’re open to being surprised by that same young physicist with some breakthrough.

Eliezer Yudkowsky (03:15:58):

It feels like a very, very basic competence that you are praising me for. And you know, like, okay, cool. I don’t think it’s good that we’re in a world where that is something that I deserve to be complimented on, but I’ve never had much luck in accepting compliments gracefully. Maybe I should just accept that one gracefully. But sure, well, thank you very much.

Lex Fridman (03:16:21):

You’ve painted with some probability a dark future. Are you yourself, just when you think, when you ponder your life and you ponder your mortality, are you afraid of death?

Eliezer Yudkowsky (03:16:39):

Think so, yeah.

Lex Fridman (03:16:42):

Does it make any sense to you that we die? There’s a power to the finiteness of human life that’s part of this whole machinery of evolution, and that finiteness doesn’t seem to be obviously integrated into AI systems. So in that aspect, it feels like we’re creating something fundamentally different.

Eliezer Yudkowsky (03:17:15):

I grew up reading books like Great Mambo Chicken and the Transhuman Condition and later on Engines of Creation and Mind Children. You know, like age 12 or thereabouts. So I never thought I was supposed to die after 80 years. I never thought that humanity was supposed to die. I thought we were like, I always grew up with the ideal in mind that we were all going to live happily ever after in the glorious transhumanist future.

(03:17:47):

I did not grow up thinking that death was part of the meaning of life and now I still think it’s a pretty stupid idea. You do not need life to be finite to be meaningful. It just has to be life.

Lex Fridman (03:18:03):

What role does love play in the human condition? We haven’t brought up love in this whole picture. We talked about intelligence, we talked about consciousness. It seems part of humanity. I would say one of the most important parts is this feeling we have towards each other.

Eliezer Yudkowsky (03:18:20):

If in the future there were routinely more than one AI, let’s say two for the sake of discussion, who would look at each other and say, I am I and you are you, and the other one also says, I am I and you are you, and sometimes they were happy and sometimes they were sad, and it mattered to the other one that this thing that is different from them, they would rather it be happy than sad, and they entangled their lives together,

(03:18:55):

Then this is a more optimistic thing than I expect to actually happen. A little fragment of meaning would be there, possibly more than a little, but that I expect this to not happen, that I do not think this is what happens by default, that I do not think that this is the future we are on track to get, is why I would go down fighting rather than just saying, oh, well.

Lex Fridman (03:19:25):

Do you think that is part of the meaning of this whole thing, of the meaning of life? What do you think is the meaning of life, of human life?

Eliezer Yudkowsky (03:19:34):

It’s all the things that I value about it and maybe all the things that I would value if I understood it better. There’s not some meaning far outside of us that we have to wonder about. There’s just like looking at life and being like, yes, this is what I want. The meaning of life is not some kind of, like meaning is something that we bring to things when we look at them. We look at them and we say like, this is its meaning to me. And it’s not that before humanity was ever here, there was like some meaning written upon the stars where you could like go out to the star where that meaning was written and like change it around and thereby completely change the meaning of life, right?

(03:20:21):

Like the notion that this is written on a stone tablet somewhere implies you could like change the tablet and get a different meaning. And that seems kind of wacky, doesn’t it? So it doesn’t feel that mysterious to me at this point. It’s just a matter of being like, yeah, I care. I care.

Lex Fridman (03:20:42):

And part of that is the love that connects all of us.

Eliezer Yudkowsky (03:20:48):

It’s one of the things that I care about.

Lex Fridman (03:20:54):

And the flourishing of the collective intelligence of the human species, you know,

Eliezer Yudkowsky (03:20:58):

that sounds kind of too fancy to me. I just look at all the people, you know, like one by one, up to the eight billion and be like, that’s life, that’s life, that’s life.

Lex Fridman (03:21:14):

You’re an incredible human, and it’s a huge honor. I was trying to talk to you for a long time because I’m a big fan. I think you’re a really important voice and a really important mind. Thank you for the fight you’re fighting. Thank you for being fearless and bold, and for everything you do. I hope we get a chance to talk again, and I hope you never give up.

Eliezer Yudkowsky (03:21:35):

Thank you for talking today. You’re welcome. I do worry that we didn’t really address a whole lot of fundamental questions I expect people have, but you know, maybe we got a little bit further and made a tiny little bit of progress. And I’d say like, be satisfied with that. But actually, no, I think one should only be satisfied with solving the entire problem.

Lex Fridman (03:21:55):

To be continued. Thanks for listening to this conversation with Eliezer Yudkowsky. To support this podcast, please check out our sponsors in the description. And now let me leave you with some words from Elon Musk. With artificial intelligence, we’re summoning the demon. Thank you for listening and hope to see you next time.


Episode Info

Eliezer Yudkowsky is a researcher, writer, and philosopher on the topic of superintelligent AI. Please support this podcast by checking out our sponsors:
Linode: https://linode.com/lex to get $100 free credit
House of Macadamias: https://houseofmacadamias.com/lex and use code LEX to get 20% off your first order
InsideTracker: https://insidetracker.com/lex to get 20% off

EPISODE LINKS:
Eliezer’s Twitter: https://twitter.com/ESYudkowsky
LessWrong Blog: https://lesswrong.com
Eliezer’s Blog page: https://www.lesswrong.com/users/eliezer_yudkowsky
Books and resources mentioned:
1. AGI Ruin (blog post): https://lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
2. Adaptation and Natural Selection: https://amzn.to/40F5gfa

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
– Check out the sponsors above, it’s the best way to support this podcast
– Support on Patreon: https://www.patreon.com/lexfridman
– Twitter: https://twitter.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Medium: https://medium.com/@lexfridman

OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) – Introduction
(05:19) – GPT-4
(28:00) – Open sourcing GPT-4
(44:18) – Defining AGI
(52:14) – AGI alignment
(1:35:06) – How AGI may kill us
(2:27:27) – Superintelligence
(2:34:39) – Evolution
(2:41:09) – Consciousness
(2:51:41) – Aliens
(2:57:12) – AGI Timeline
(3:05:11) – Ego
(3:11:03) – Advice for young people
(3:16:21) – Mortality
(3:18:02) – Love

