Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present.


Lex Fridman (00:00):

The following is a conversation with Max Tegmark, his third time on the podcast. In fact, his first appearance was episode number one of this very podcast. He is a physicist and artificial intelligence researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence. Most recently, he’s a key figure in spearheading the open letter calling for a six-month pause on giant AI experiments like training GPT-4. The letter reads: we’re calling for a pause on training of models larger than GPT-4 for six months.


This does not imply a pause or ban on all AI research and development, or the use of systems that have already been placed on the market. Our call is specific and addresses a very small pool of actors who possess this capability. The letter has been signed by over 50,000 individuals, including 1,800 CEOs and over 1,500 professors. Signatories include Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, Yuval Noah Harari, Andrew Yang, and many others.


This is a defining moment in the history of human civilization, where the balance of power between human and AI begins to shift. And Max’s mind and voice are among the most valuable and powerful in a time like this. His support, his wisdom, and his friendship have been a gift I’m forever deeply grateful for. And now, a quick few-second mention of each sponsor. Check them out in the description. It’s the best way to support this podcast.


We’ve got Notion for project and team collaboration, InsideTracker for biological data, and Indeed for hiring. Choose wisely, my friends. Also, speaking of hiring, if you want to work with our amazing team, we’re always hiring, whether it’s through Indeed or otherwise; go to slash hiring. And now onto the full ad reads. As always, no ads in the middle. I try to make these interesting, but if you must skip them, please still check out our sponsors. I enjoy their stuff. Maybe you will too.


This show is brought to you by Notion. I’ve spoken endlessly about how amazing Notion is, how everybody, all the cool kids are recommending it for just basic note-taking, but there’s so, so much more. It’s the collaborative aspect of it, the project management aspect of it, the wikis, the document sharing, all of that, all in a simple, powerful, beautifully designed solution. What can I say? On top of this, there’s the Notion AI tool. This is the best integration of large language models into a productivity note-taking tool.


There are so many amazing features. I mean, it’s just endless. Go to their website. You can generate entire presentations and reports based on a to-do list. You can summarize stuff. You can shorten stuff. You can generate tables based on a description. You can write a summary. You can expand the text. You can change the style of the text. You can fix spelling and grammar. You can translate. You can use simpler language, more complicated language, change the tone of the voice, make it shorter, longer. Like I said, everything is just so easy to play around with, and all of it, no matter what you’re doing, will challenge you to think about how you write.


It will challenge you to expand the style of your writing. It will save you a lot of time, of course, but I just think it makes you a better thinker and productive being in this world, and I think that’s such a great integration of AI into the productivity workflow. To me, it’s not enough for a large language model to be effective at answering questions and having good dialogue. You have to really integrate it into the workflow, and Notion, better than anybody else I’ve seen, has done that.


So if that’s interesting to you, Notion AI helps you work faster, write better, and think bigger, doing tasks that normally take you hours in just minutes. Try Notion AI for free when you go to slash lex. That’s all lowercase, slash lex, to try the power of Notion AI today. This show is also brought to you by InsideTracker, a service I use to track biological data.


It’s really good to do that kind of thing regularly, to look at all the different markers in your body and to understand what could be made better through lifestyle and diet changes. It’s kind of obvious that decisions about your life should be made based on the data that comes from your own body, not some kind of population study, although those are good; not some spiritual guru, although those are good; not some novel, whether it’s Harry Potter or Dostoevsky, although those are sometimes good; not your relative who says, I heard a guy say that a guy does this thing, which is very bro-sciencey sounding, although sometimes it turns out to be pretty effective.


Overall, the best decisions about your life should be based on the things that come from your own body. InsideTracker uses algorithms to analyze your blood data, DNA data, fitness tracker data, all that kind of stuff to give you recommendations. You should be doing it regularly, so it’s not just a one-time thing; over time, you see what changes led to improvements in the various markers that come from your body. Get special savings for a limited time when you go to slash lex. This show is also brought to you by Indeed, a hiring website. I think the most important thing in life, not to quote Conan the Barbarian, because that would be very inappropriate to quote at this moment, and it’s not actually accurate at all as a reflection of what’s important in life. It only has comedic value.


What I really want to say about what’s important in life is the people you surround yourself with, and we spend so much of our time in the workplace seeking solutions to very difficult problems together, passionately pursuing ambitious goals, sometimes impossible goals. That is the source of meaning, a source of happiness for people, and I think part of that happiness comes from the collaboration with other human beings, the sort of professional depth of connection that you have with other human beings, of being together through the grind and surviving and accomplishing the goal or failing in a big, epic way, knowing that you have tried together.


Doing that with the right team, I think, is one of the most important things in life, so you should surround yourself with the right team. If you’re looking to join a team, you should be very selective about that, or if you’re looking to hire a team, you should be very selective about that and use the best tools for the job. I’ve used Indeed many, many times throughout my life for the teams I’ve led. Don’t overspend on hiring. Visit slash Lex to start hiring now. That’s slash Lex. Terms and conditions apply. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Max Tegmark.


You were the first-ever guest on this podcast, episode number one. So first of all, Max, I just have to say thank you for giving me a chance. Thank you for starting this journey. It’s been an incredible journey. Just thank you for sitting down with me and acting like I’m somebody who matters, that I’m somebody who’s interesting to talk to. And thank you for doing it. That meant a lot.

Max Tegmark (07:58):

Thanks to you for putting your heart and soul into this. I know when you delve into controversial topics, it’s inevitable to get hit by what Hamlet talks about, the slings and arrows and stuff. And I really admire this. It’s an era where people think YouTube videos are too long and everything has to be a 20-second TikTok clip. It’s just so refreshing to see you going exactly against all of the advice and doing these really long-form things. And the people appreciate it. Reality is nuanced. And thanks for sharing it that way.

Lex Fridman (08:33):

So let me ask you again, the first question I’ve ever asked on this podcast. Episode number one, talking to you, do you think there’s intelligent life out there in the universe? Let’s revisit that question. Do you have any updates? What’s your view when you look out to the stars?

Max Tegmark (08:51):

So when we look out to the stars, if you define our universe the way most astrophysicists do, not as all of space, but as the spherical region of space that we can see with our telescopes, from which light has had time to reach us since our Big Bang, I’m in the minority: I estimate that we are the only life in this spherical volume that has invented the internet, radios, gotten to our level of tech. And if that’s true, then it puts a lot of responsibility on us to not mess this one up. Because if it’s true, it means that life is quite rare, and we are stewards of this one spark of advanced consciousness, which, if we nurture it and help it grow, eventually life can spread from here out into much of our universe, and we can have this just amazing future. Whereas if we instead are reckless with the technology we build and just snuff it out due to stupidity or infighting, then maybe the rest of cosmic history in our universe is just gonna be a play for empty benches. But I do think that we are actually very likely to get visited by aliens, alien intelligence, quite soon. But I think we are gonna be building that alien intelligence.

Lex Fridman (10:15):

So we’re going to give birth to an intelligent alien civilization, unlike anything that evolution here on Earth was able to create in terms of the path, the biological path, it took.

Max Tegmark (10:31):

Yeah, and it’s gonna be much more alien than a cat, or even the most exotic animal on the planet right now, because it will not have been created through the usual Darwinian competition, where it necessarily cares about self-preservation, is afraid of death, any of those things. And the space of alien minds that you can build is just so much vaster than what evolution will give you. And with that also comes a great responsibility for us to make sure that the kind of minds we create are the kind of minds that it’s good to create.


Minds that will share our values and be good for humanity and life. And also create minds that don’t suffer.

Lex Fridman (11:22):

Yeah. Do you try to visualize the full space of alien minds that AI could be? Do you try to consider all the different kinds of intelligences? Sort of generalizing what humans are able to do to the full spectrum of what intelligent creatures, entities could do?

Max Tegmark (11:40):

I try, but I would say I fail. I mean, it’s very difficult for a human mind to really grapple with something so completely alien. I mean, even for us, right? If we just try to imagine, how would it feel if we were completely indifferent towards death or individuality? Even if you just imagine that, for example, you could just copy my knowledge of how to speak Swedish. Boom, now you can speak Swedish. And you could copy any of my cool experiences and then you could delete the ones you didn’t like in your own life, just like that.


It would already change quite a lot about how you feel as a human being, right? You’d probably spend less effort studying things if you could just copy them. And you might be less afraid of death, because if the plane you’re on starts to crash, you’d just be like, oh, shucks, I haven’t backed up my brain for four hours, so I’m gonna lose all these wonderful experiences from this flight.


We might also start feeling more compassionate, maybe, with other people, if we can so readily share each other’s experiences and our knowledge, and feel more like a hive mind. It’s very hard, though. I really feel very humble about grappling with how it might actually feel. The one thing which is so obvious, though, which I think is just really worth reflecting on, is that because the mind space of possible intelligences is so different from ours, it’s very dangerous if we assume they’re gonna be like us, or anything like us.

Lex Fridman (13:28):

Well, the entirety of human written history, through poetry, through novels, through philosophy, has been trying to describe the human condition and what’s entailed in it. Like you said, fear of death and all those kinds of things. What is love? And all of that changes if you have a different kind of intelligence. All of it, the entirety; all those poems, they’re trying to sneak up to what the hell it means to be human. All of that changes. How the existential crises that AI experiences clash with the human existential crisis, the human condition, that’s hard to fathom, hard to predict.

Max Tegmark (14:13):

It’s hard, but it’s fascinating to think about also. Even in the best-case scenario, where we don’t lose control over the ever more powerful AI that we’re building to other humans whose goals we think are horrible, and where we don’t lose control to the machines, and AI provides the things we want, even then you get into the questions you touched on here. Maybe the struggle, the fact that it’s actually hard to do things, is part of what gives us meaning as well, right?


So, for example, I found it so shocking that this new Microsoft GPT-4 commercial that they put together has this woman talking about, showing this demo of how she’s gonna give a graduation speech to her beloved daughter, and she asks GPT-4 to write it. It was freaking 200 words or so. If I realized that my parents couldn’t be bothered struggling a little bit to write 200 words and outsourced that to their computer, I would feel really offended, actually.


And so, I wonder, if we eliminate too much of the struggle from our existence, do you think that would also take away a little bit of what…

Lex Fridman (15:31):

…it means to be human, yeah. We can’t even predict. I had somebody mention to me that they started using ChatGPT, the 3.5 version, not 4, to write what they really feel to a person. They have a temper issue, and they’re basically trying to get ChatGPT to rewrite it in a nicer way, to get the point across but rewrite it in a nicer way. So we’re even removing the inner asshole from our communication. There are some positive aspects of that, but mostly it’s just the transformation of how humans communicate. And it’s scary, because so much of our society is based on this glue of communication.


And if we’re now using AI as the medium of communication that does the language for us, so much of the emotion that’s laden in human communication, so much of the intent that’s going to be handled by outsourced AI, how does that change everything? How does that change the internal state of how we feel about other human beings? What makes us lonely? What makes us excited? What makes us afraid? How we fall in love? All that kind of stuff.

Max Tegmark (16:51):

Yeah, for me personally, I have to confess, the challenge is one of the things that really makes my life feel meaningful. If I go hike a mountain with my wife, Meia, I don’t want to just press a button and be at the top. I want to struggle and come up there sweaty and feel, wow, we did this.


In the same way, I want to constantly work on myself to become a better person. If I say something in anger that I regret, I want to go back and really work on myself rather than just tell an AI to, from now on, always filter what I write so I don’t have to work on myself because then I’m not growing.

Lex Fridman (17:36):

Yeah, but then again, it could be like with chess. An AI, once it obviously, significantly surpasses the performance of humans, will live in its own world and provide maybe a flourishing civilization for humans, but we humans will continue hiking mountains and playing our games, even though AI is so much smarter, so much stronger, so much superior in every single way, just like with chess. I mean, that’s one possible, hopeful trajectory here: that humans will continue to human, and AI will just be a kind of medium that enables the human experience to flourish.

Max Tegmark (18:26):

Yeah, I would phrase that as rebranding ourselves from Homo sapiens to Homo sentiens. Right now, sapiens, the ability to be intelligent, we’ve even put it in our species name, so we’re branding ourselves as the smartest information processing entity on the planet. That’s clearly gonna change if AI continues ahead.


So maybe we should focus on the experience instead, the subjective experience that we have as Homo sentiens, and say that’s what’s really valuable: the love, the connection, the other things. Get off our high horses and get rid of this hubris that only we can do integrals.

Lex Fridman (19:14):

So consciousness, the subjective experience is a fundamental value to what it means to be human. Make that the priority.

Max Tegmark (19:24):

That feels like a hopeful direction to me, but that also requires more compassion, not just towards other humans because they happen to be the smartest on the planet, but also towards all our other fellow creatures on this planet. I personally feel right now we’re treating a lot of farm animals horribly, for example, and the excuse we’re using is, oh, they’re not as smart as us. But if we admit that we’re not that smart in the grand scheme of things either, in the post-AI epoch, then surely we should value the subjective experience of a cow also.

Lex Fridman (19:58):

Well, allow me to briefly look at the book, which at this point is becoming more and more visionary, that you wrote, I guess, over five years ago, Life 3.0. So first of all, 3.0: what’s 1.0, what’s 2.0, what’s 3.0? And how has the vision in the book evolved to today?

Max Tegmark (20:20):

Life 1.0 is really dumb, like bacteria, in that it can’t actually learn anything at all during its lifetime. The learning just comes from this genetic process from one generation to the next. Life 2.0 is us and other animals which have brains, which can learn a great deal during their lifetime. You were born without being able to speak English, and at some point you decided, hey, I want to upgrade my software. Let’s install an English-speaking module.


So you did. And Life 3.0, which does not exist yet, can replace not only its software, the way we can, but also its hardware. And that’s where we’re heading at high speed. We’re already maybe Life 2.1, because we can put in an artificial knee, a pacemaker, et cetera. And if Neuralink and other companies succeed, we’ll be Life 2.2, et cetera. But the companies trying to build AGI are trying to make this, of course, full 3.0, and you can put that intelligence in something

Lex Fridman (21:39):

that also has no biological basis whatsoever. So less constraints and more capabilities, just like the leap from 1.0 to 2.0. Nevertheless, you speak so harshly about bacteria, so disrespectfully about bacteria, but there is still the same kind of magic there that permeates Life 2.0 and 3.0. It seems like maybe the thing that’s truly powerful about life, intelligence, and consciousness was already there in 1.0. Is it possible?

Max Tegmark (22:13):

I think we should be humble and not be so quick to make everything binary and say either it’s there or it’s not. Clearly, there’s a great spectrum, and there is even controversy about whether some unicellular organisms like amoebas can maybe learn a little bit after all. So apologies if I offended any bacteria here. It wasn’t my intent.


It was more that I wanted to talk up how cool it is to actually have a brain where you can learn dramatically within your lifetime. Typical human. And the higher up you get from 1.0 to 2.0 to 3.0, the more you become the captain of your own ship, the master of your own destiny, and the less you become a slave to whatever evolution gave you, right? By upgrading our software, we can be so different from previous generations and even from our parents, much more so than even a bacterium. You know, no offense to them. And if you can also swap out your hardware and take any physical form you want, of course, really the sky’s the limit.

Lex Fridman (23:15):

Yeah, so it accelerates the rate at which you can perform the computation that determines your destiny.

Max Tegmark (23:24):

Yeah, and I think it’s worth commenting a bit on what “you” means in this context also, if you swap things out a lot, right? This is controversial, but my current understanding is that life is best thought of not as a bag of meat, or even a bag of elementary particles, but rather as a system which can process information and retain its own complexity, even though nature is always trying to mess it up. So it’s all about information processing, and that makes it a lot like something like a wave in the ocean, which is not its water molecules, right?


The water molecules bob up and down, but the wave moves forward. It’s an information pattern. In the same way, you, Lex, are not the same atoms as during the first… Time we talked, yeah. …podcast you did with me. You’ve swapped out most of them, but it’s still you. Yeah. And the information pattern is still there. And if you could swap out your arms and whatever, you can still have this kind of continuity.


It becomes a much more sophisticated sort of wave forward in time, where the information lives on. I lost both of my parents since our last podcast, and it actually gives me a lot of solace that with this way of thinking about them, they haven’t entirely died, because a lot of mommy and daddy’s, I’m sorry, I’m getting a little emotional here, but a lot of their values and ideas and even jokes and so on, they haven’t gone away, right? Some of them live on. I can carry on some of them, and they also live on in a lot of other people. So in this sense, even with Life 2.0, we can, to some extent, already transcend our physical bodies and our death. And particularly if you can share your own information, your own ideas, with many others, like you do in your podcast, then that’s the closest to immortality we can get with our bio-bodies.

Lex Fridman (25:45):

You carry a little bit of them in you in some sense. Yeah, yeah. Do you miss them? Do you miss your mom and dad?

Max Tegmark (25:52):

Of course, of course.

Lex Fridman (25:53):

What did you learn about life from them, if we can take a bit of a tangent?

Max Tegmark (26:00):

Oh, so many things. For starters, my fascination for math and the physical mysteries of our universe, I think I got a lot of that from my dad. But I think my obsession for fairly big questions and consciousness and so on, that actually came mostly from my mom.


And what I got from both of them, which is a very core part of really who I am, I think is this, just feeling comfortable with not buying into what everybody else is saying, doing what I think is right. They both very much did their own thing, and sometimes they got flack for it, and they did it anyway.

Lex Fridman (27:02):

That’s why you’ve always been an inspiration to me, that you’re at the top of your field and you’re still willing to tackle the big questions in your own way. You’re one of the people that represents MIT best to me. You’ve always been an inspiration in that. So it’s good to hear that you got that from your mom and dad.

Max Tegmark (27:22):

Yeah, you’re too kind. But yeah, I mean, the good reason to do science is because you’re really curious, you wanna figure out the truth. If you think this is how it is, and everyone else says, no, no, that’s bullshit, and it’s that way, you stick with what you think is true. And even if everybody else keeps thinking it’s bullshit, there’s a certain…


I always root for the underdog when I watch movies. And my dad, for example, one time when I wrote one of my craziest papers ever, talking about our universe ultimately being mathematical, which we’re not gonna get into today, I got this email from a quite famous professor saying, this is not only bullshit, but it’s gonna ruin your career. You should stop doing this kind of stuff. I sent it to my dad. Do you know what he said? What did he say? He replied with a quote from Dante: Segui il tuo corso, e lascia dir le genti. Follow your own path and let the people talk. Go, dad. This is the kind of thing; yeah, he’s dead, but that attitude is not.

Lex Fridman (28:33):

How did losing them as a man, as a human being, change you? How did it expand your thinking about the world? How did it expand your thinking about this thing we’re talking about, which is humans creating another living, sentient, perhaps, being?

Max Tegmark (28:52):

I think it mainly did two things. One of them, just going through all their stuff after they had passed away and so on, just drove home to me how important it is to ask ourselves, why are we doing the things we do? Because it’s inevitable that you look at some things they spent an enormous time on, and you ask, in hindsight, would they really have spent so much time on this? Would they have done something else? Would they have done something that was more meaningful? So I’ve been looking more in my life now and asking, why am I doing what I’m doing? And I feel it should either be something I really enjoy doing, or it should be something that I find really, really meaningful, because it helps humanity.


And if it’s in neither of those two categories, maybe I should spend less time on it, you know? The other thing is dealing with death up close and personal, like this; it’s actually made me even less afraid of other people telling me that I’m an idiot, you know, which happens regularly. I’m just gonna live my life, do my thing, you know? And it’s made it a little bit easier for me to focus on what I feel is really important.

Lex Fridman (30:19):

What about fear of your own death? Has it made it more real that this is something that happens?

Max Tegmark (30:28):

Yeah, it’s made it extremely real, and I’m next in line in our family now, right? Me and my younger brother. But they both handled it with such dignity. That was a true inspiration, also. They never complained about things, and you know, when you’re old and your body starts falling apart, there’s more and more to complain about. They looked at what they could still do that was meaningful, and they focused on that, rather than wasting time talking about, or even thinking much about, things they were disappointed in. I think anyone can make themselves depressed if they start their morning by making a list of grievances, whereas if you start your day with a little meditation on just the things you’re grateful for, you basically choose to be a happy person.

Lex Fridman (31:18):

Because you only have a finite number of days, you should spend them well. Make them count. Being grateful. Well, you do happen to be working on a thing which seems to have potentially some of the greatest impact on human civilization of anything humans have ever created: artificial intelligence. You work on this on both the detailed technical level and the high philosophical level. So, you’ve mentioned to me that there’s an open letter that you’re working on.

Max Tegmark (31:53):

It’s actually going live in a few hours. So I’ve been having late nights and early mornings. It’s been very exciting, actually. In short, have you seen Don’t Look Up? Yes, yes. I don’t want to be the movie spoiler for anyone watching this who hasn’t seen it, but if you’re watching this and you haven’t seen it, watch it, because we are actually acting it out. It’s life imitating art. Humanity is doing exactly that right now, except it’s an asteroid that we are building ourselves. Almost nobody is talking about it. People are squabbling across the planet about all sorts of things which seem very minor compared to the asteroid that’s about to hit us, right? Most politicians don’t even have this on their radar. They think maybe in 100 years or whatever. Right now, we’re at a fork in the road. This is the most important fork humanity has reached in its over 100,000 years on this planet.


We’re building effectively a new species that’s smarter than us. It doesn’t look so much like a species yet because it’s mostly not embodied in robots, but that’s a technicality which will soon be changed. And this arrival of artificial general intelligence that can do all our jobs as well as us, and probably shortly thereafter superintelligence which greatly exceeds our cognitive abilities, it’s gonna either be the best thing ever to happen to humanity or the worst. I’m really quite confident that there is not that much middle ground there.

Lex Fridman (33:34):

But it would be fundamentally transformative

Max Tegmark (33:37):

to human civilization. Of course, utterly and totally. Again, we branded ourselves as Homo sapiens because it seemed like the basic thing. We’re the king of the castle on this planet. We’re the smart ones, so we can control everything else. This could very easily change. We’re certainly not gonna be the smartest on the planet for very long unless AI progress just halts. And we can talk more about why I think that’s true, because it’s controversial. And then we can also talk about reasons you might think it’s gonna be the best thing ever, and reasons you might think it’s gonna be the end of humanity, which is, of course, super controversial. But what I think anyone who’s working on advanced AI can agree on is that it’s much like the film Don’t Look Up, in that it’s just really comical how little serious public debate there is about it, given how huge it is.

Lex Fridman (34:42):

So what we’re talking about is the current development of things like GPT-4, and the signs it’s showing of rapid improvement that may in the near term lead to the development of superintelligent AGI, general AI systems, and what kind of impact that has on society when that thing achieves general human-level intelligence, and then, beyond that, general superhuman-level intelligence.


There are a lot of questions to explore here. So one, you mentioned halt. Is that the content of the letter? Is it to suggest that maybe we should pause the development of these systems? Exactly.

Max Tegmark (35:29):

So this is very controversial. When we talked the first time, we talked about how I was involved in starting the Future of Life Institute, and what we worked very hard on in 2014, 2015 was to mainstream AI safety. The idea that there even could be risks, and that you could do things about them. Before then, a lot of people thought it was just really kooky to even talk about it, and a lot of AI researchers felt worried that this was too flaky and could be bad for funding, and that the people who talked about it just didn’t understand AI. I’m very, very happy with how that’s gone, and that now it’s completely mainstream. You go to any AI conference and people talk about AI safety, and it’s a nerdy technical field full of equations and blah, blah, as it should be.


But there’s this other thing, which has been quite taboo up until now: calling for a slowdown. So what we’ve constantly been saying, including myself, I’ve been biting my tongue a lot, is that we don’t need to slow down AI development, we just need to win this race, the wisdom race, between the growing power of the AI and the growing wisdom with which we manage it. And rather than trying to slow down AI, let’s just try to accelerate the wisdom. Do all this technical work to figure out how you can actually ensure that your powerful AI is gonna do what you want it to do, and have society adapt also, with incentives and regulations, so that these things can be put to good use. Sadly, that didn’t pan out.


The progress on technical AI and capabilities has gone a lot faster than we anticipated. It’s a lot faster than many people thought back when we started this in 2014. Turned out to be easier to build really advanced AI than we thought. And on the other side, it’s gone much slower than we hoped with getting policy makers and others to actually put incentives in place to steer this in the good direction. Maybe we should unpack it and talk a little bit about each. So why did it go faster than a lot of people thought? In hindsight, it’s exactly like building flying machines.


People spent a lot of time wondering about how do birds fly? And that turned out to be really hard. Have you seen the TED Talk with a flying bird? Like a flying robotic bird? Yeah, it flies around the audience. But it took 100 years longer to figure out how to do that than for the Wright brothers to build the first airplane because it turned out there was a much easier way to fly. And evolution picked the more complicated one because it had its hands tied. It could only build a machine that could assemble itself, which the Wright brothers didn’t care about. It can only build a machine that’ll use only the most common atoms in the periodic table. Wright brothers didn’t care about that. They could use steel, iron atoms. And it had to be able to repair itself but it also had to be incredibly fuel-efficient.


A lot of birds use less than half the fuel of a remote-controlled plane flying the same distance. For humans, just throw in a little more fuel, and there you go, 100 years earlier. That’s exactly what’s happening now with these large language models. The brain is incredibly complicated. Many people made the mistake of thinking we have to figure out how the brain does human-level AI first before we could build it in a machine. That was completely wrong. You can take an incredibly simple computational system called a transformer network and just train it to do something incredibly dumb.


Just read a gigantic amount of text and try to predict the next word. And it turns out, if you just throw a ton of compute at that and a ton of data, it gets to be frighteningly good, like GPT-4, which I’ve been playing with so much since it came out, right? There’s still some debate about whether that can get you all the way to full human level or not, but yeah, we can come back to the details of that and how you might get to human-level AI even if large language models don’t.
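To make that recipe concrete, here is a minimal sketch, in PyTorch, of the "incredibly dumb" objective being described: feed in text, predict the next token, repeat at scale. The model, sizes, and random stand-in data are all illustrative, not anything from a real GPT-style system (which differs mainly in scale, and also adds positional embeddings, omitted here for brevity):

```python
import torch
import torch.nn as nn

vocab_size, d_model, context = 1000, 64, 32

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.unembed = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # Causal mask: each position only sees earlier tokens (one-way street).
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.blocks(self.embed(tokens), mask=mask)
        return self.unembed(h)  # logits over the vocabulary at every position

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
tokens = torch.randint(0, vocab_size, (8, context))  # stand-in for real text

logits = model(tokens[:, :-1])       # predict from every prefix at once
loss = nn.functional.cross_entropy(  # "try to predict the next word"
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)
loss.backward()
opt.step()
```

Throwing vastly more compute and data at essentially this loop is, on Tegmark's account, what gets you to something frighteningly good like GPT-4.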

Lex Fridman (40:00):

Can you briefly, if it’s just a small tangent, comment on your feelings about GPT-4? You suggest that you’re impressed by this rate of progress, but where is it? Can GPT-4 reason? What are the intuitions? What are human-interpretable words you can assign to the capabilities of GPT-4 that make you so damn impressed with it?

Max Tegmark (40:23):

I’m both very excited about it and terrified. It’s an interesting mixture of emotions.

Lex Fridman (40:30):

All the best things in life include those two somehow.

Max Tegmark (40:33):

Yeah, it can absolutely reason. Anyone who hasn’t played with it, I highly recommend doing that before dissing it. It can do quite remarkable reasoning. I’ve had it do a lot of things which I realized I couldn’t have done that well myself, even. And it obviously does it dramatically faster than we do, too, when you watch it type. And it’s doing that while servicing a massive number of other humans at the same time. At the same time, it cannot reason as well as a human can on some tasks. It’s obviously a limitation from its architecture.


We have, in our heads, what in geek speak is called a recurrent neural network. There are loops. Information can go from this neuron to this neuron to this neuron, and then back to the first one. You can ruminate on something for a while. You can self-reflect a lot. These large language models, they cannot. GPT-4 is a so-called transformer, which is just a one-way street of information, basically; in geek speak, it’s called a feed-forward neural network. And it’s only so deep, so it can only do logic that’s that many steps and that deep, and you can create problems which it will fail to solve for that reason.
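A toy contrast between the two information flows being described here, assuming nothing beyond standard PyTorch; the sizes and step counts are arbitrary:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16)  # some input "thought"

# Feed-forward (transformer-like): a fixed stack of layers, so only a
# fixed number of sequential reasoning steps per forward pass.
layers = []
for _ in range(4):
    layers += [nn.Linear(16, 16), nn.ReLU()]
feedforward = nn.Sequential(*layers)
y = feedforward(x)  # information flows through once and comes out

# Recurrent (the brain, in this analogy): one cell applied in a loop,
# so the network can "ruminate" for as many steps as it needs.
cell = nn.RNNCell(16, 16)
h = torch.zeros(1, 16)
for _ in range(100):  # could iterate arbitrarily long on one problem
    h = cell(x, h)    # the state feeds back in; loops are allowed
```

The feed-forward stack's depth, fixed when the model is built, is the cap on step-by-step logic that Max is pointing at.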


But the fact that it can do such amazing things with this incredibly simple architecture already is quite stunning. And what we see in my lab at MIT, when we look inside large language models to try to figure out how they’re doing it, that’s the core focus of our research; it’s called mechanistic interpretability in geek speak. You have this machine that does something smart, and you try to reverse engineer it, to see how it does it.


Or you can think of it also as artificial neuroscience. That’s exactly what neuroscientists do with actual brains. But here you have the advantage that you don’t have to worry about measurement errors. You can see what every neuron is doing all the time. And a recurring thing we see again and again, and there have been a number of beautiful papers quite recently by a lot of researchers, some of them here, even, in this area, is that when they figure out how something is done, you can say, oh man, that’s such a dumb way of doing it, and you immediately see how it can be improved.


Like, for example, there was a beautiful paper recently where they figured out how a large language model stores certain facts, like, the Eiffel Tower is in Paris, and they figured out exactly how it’s stored. And the proof that they understood it was that they could edit it. They changed some of the synapses in it, and then they asked it, where’s the Eiffel Tower? And it said, it’s in Rome. And then they asked it, how do you get there from Germany? Oh, you take this train to Roma Termini train station, and this and that.


And what might you see if you’re in front of it? Oh, you might see the Colosseum. So they had edited it. They literally moved it to Rome. But the way that it’s storing this information is incredibly dumb. For any fellow nerds listening to this: there was a big matrix, and roughly speaking, there are certain row and column vectors which encode these things, and they correspond very hand-wavily to principal components. It would be much more efficient to just store it in a database, you know?
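A hand-wavy sketch of the kind of edit described above, in the spirit of the Rome/Paris experiment: treat one weight matrix as a key-to-value store, and apply a rank-one update so the key for "Eiffel Tower is in ..." retrieves a different value. The vectors here are hypothetical stand-ins, not real model activations:

```python
import torch

d = 8
W = torch.randn(d, d)    # one MLP weight matrix inside the model
k = torch.randn(d)
k /= k.norm()            # hypothetical key activated by "Eiffel Tower is in ..."
v_rome = torch.randn(d)  # hypothetical value we want retrieved instead of Paris

# Rank-one update: change what W returns for key k, leaving other
# (roughly orthogonal) keys mostly untouched.
v_now = W @ k
W += torch.outer(v_rome - v_now, k)

assert torch.allclose(W @ k, v_rome, atol=1e-5)  # the "fact" now reads Rome
```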


But everything we’ve figured out so far about how these things work points to ways in which they can easily be improved. And the fact that this particular architecture has some roadblocks built into it is in no way gonna prevent crafty researchers from quickly finding workarounds and making other kinds of architectures go all the way. So in short, it’s turned out to be a lot easier to build close-to-human intelligence than we thought, and that means our runway as a species to get our shit together has shortened.

Lex Fridman (44:44):

And it seems like the scary thing about the effectiveness of large language models… so, Sam Altman, who I recently had a conversation with, he really showed that the leap from GPT-3 to GPT-4 has to do with just a bunch of hacks, a bunch of little explorations by smart researchers doing a few little fixes here and there. It’s not some fundamental leap or transformation in the architecture.

Max Tegmark (45:14):

And more data and more compute.

Lex Fridman (45:16):

And more data and compute, but he said the big leaps have to do with not the data and the compute, but just learning this new discipline, just like you said. So researchers are going to look at these architectures, and there might be big leaps where you realize, wait, why are we doing this in this dumb way? And all of a sudden this model is 10x smarter. And that can happen on any one day, on any one Tuesday or Wednesday afternoon.


And then all of a sudden you have a system that’s 10x smarter. It’s such a new discipline. We understand so little about why this thing works so damn well that the steady improvement of compute, linear or exponential, and the steady improvement of the data may not be the thing that even leads to the next leap. It could be a surprise little hack that improves everything.

Max Tegmark (46:03):

Or a lot of little leaps here and there, because so much of this is out in the open also, and so many smart people are looking at this and trying to figure out little leaps here and there. And it becomes this sort of collective race, where a lot of people feel, if I don’t take the leap, someone else will. And this is actually very crucial for the other part of it: why do we want to slow this down? So again, what this open letter is calling for is just pausing all training of systems that are more powerful than GPT-4 for six months. Just give a chance for the labs to coordinate a bit on safety, and for society to adapt; give the right incentives to the labs. Because you’ve interviewed a lot of these people who lead these labs, and you know just as well as I do that they’re good people. They’re idealistic people. They’re doing this first and foremost because they believe that AI has a huge potential to help humanity. But at the same time, they are trapped in this horrible race to the bottom. Have you read Meditations on Moloch by Scott Alexander? Yes. Yeah, it’s a beautiful essay on this poem by Ginsberg, where he interprets it as being about this monster.


It’s this game theory monster that pits people against each other in this race to the bottom, where everybody ultimately loses. The evil thing about this monster is that even though everybody sees it and understands, they still can’t get out of the race, right? A good fraction of all the bad things that we humans do are caused by Moloch, and I like Scott Alexander’s naming of the monster, so we humans can think of it as a thing. Why do we have overfishing? Why do we have, more generally, the tragedy of the commons? So, Liv Boeree, I don’t know if you’ve had her on your podcast.

Lex Fridman (48:09):

Yeah, she’s become a friend, yeah.

Max Tegmark (48:11):

Great, she made this awesome point recently that beauty filters that a lot of female influencers feel pressure to use are exactly Moloch in action again. First, nobody was using them and people saw them just the way they were. And then some of them started using it and becoming ever more plastic fantastic. And then the other ones that weren’t using it started to realize that if they wanna just keep their market share, they have to start using it too. And then you’re in a situation where they’re all using it and none of them has any more market share or less than before. So nobody gained anything, everybody lost.


And they have to keep becoming ever more plastic fantastic also, right? But nobody can go back to the old way because it’s just too costly, right? So Moloch is everywhere. And Moloch is not a new arrival on the scene either. We humans have developed a lot of collaboration mechanisms to help us fight back against Moloch through various kinds of constructive collaboration.


The Soviet Union and the United States did sign a number of arms control treaties against Moloch, who was trying to stoke them into unnecessarily risky nuclear arms races, et cetera, et cetera. And this is exactly what’s happening on the AI front. This time, it’s a little bit geopolitics, but it’s mostly money, where there’s just so much commercial pressure. If you take any of these leaders of the top tech companies, if they just say, this is too risky, I wanna pause for six months, they’re gonna get a lot of pressure from shareholders and others who are like, well, if you pause but those guys don’t pause, we don’t wanna get our lunch eaten. And shareholders even have the power to replace the executives in the worst case, right?


So we did this open letter because we wanna help these idealistic tech executives to do what their heart tells them by providing enough public pressure on the whole sector to just pause so that they can all pause in a coordinated fashion. And I think without the public pressure, none of them can do it alone, push back against their shareholders no matter how good-hearted they are. Because Moloch is a really powerful foe.

Lex Fridman (50:47):

So the idea is to, for the major developers of AI systems like this, so we’re talking about Microsoft, Google, Meta, and anyone else?

Max Tegmark (51:00):

Well, OpenAI is very close with Microsoft now, of course, and there are plenty of smaller players. For example, Anthropic is very impressive. There’s Conjecture. There are many, many, many players. I don’t wanna make a long list and risk leaving anyone out. For that reason, it’s so important that some coordination happens, that there’s external pressure on all of them saying, you all need to pause. Because then the researchers in these organizations, the leaders who wanna slow down a little bit, they can say to their shareholders, everybody’s slowing down because of this pressure, and it’s the right thing to do.

Lex Fridman (51:41):

Have you seen examples in history where it’s been possible to pause Moloch?

Max Tegmark (51:47):

Absolutely. Human cloning, for example. You could make so much money on human cloning. Why aren’t we doing it? Because biologists thought hard about this and felt like, this is way too risky. They got together in the 70s in Asilomar and decided even to stop a lot more stuff also: editing the human germline, gene editing that goes into our offspring. They decided, let’s not do this, because it’s too unpredictable what it’s gonna lead to.


We could lose control over what happens to our species, so they paused. There was a ton of money to be made there. So it’s very doable, but you need public awareness of what the risks are and the broader community coming in and saying, hey, let’s slow down. Another common pushback I get today is, we can’t stop in the West because of China; and in China, undoubtedly, they also get told, we can’t slow down because of the West, because both sides think they’re the good guy. But look at human cloning.


Well, did China forge ahead with human cloning? There’s been exactly one human cloning that’s actually been done that I know of. It was done by a Chinese guy. Do you know where he is now? Where? In jail. And do you know who put him there? Who? The Chinese government. Not because Westerners said, China, look, this is… No, the Chinese government put him there because they also like control, the Chinese government. If anything, maybe they are even more concerned about having control than Western governments; they have no incentive to just lose control over where everything is going. And you can also see the Ernie bot that was released by, I believe, Baidu recently; they got a lot of pushback from the government and had to rein it in in a big way. I think once this basic message comes out, that this isn’t an arms race, it’s a suicide race, where everybody loses if anybody’s AI goes out of control, it really changes the whole dynamic.


I’ll say this again, because this is a very basic point I think a lot of people get wrong. Because a lot of people dismiss the whole idea that AI can really get very superhuman, because they think there’s something really magical about intelligence such that it can only exist in human minds. Because they believe that, they think it’s gonna get to just more or less GPT-4++ and then that’s it. They don’t see it as a suicide race. They think whoever gets that first, they’re gonna control the world, they’re gonna win. That’s not how it’s gonna be.


And we can talk again about the scientific arguments for why it’s not gonna stop there. But the way it’s gonna be is, if anybody completely loses control… And you don’t care: if someone manages to take over the world who really doesn’t share your goals, you probably don’t really even care very much what nationality they have. You’re not gonna like it; it’s much worse than today. If you live in an Orwellian dystopia, what do you care who created it, right? And if it goes farther and we just lose control even to the machines, so that it’s not us versus them, it’s us versus it, what do you care who created this unaligned entity, which has goals different from humans, ultimately, and we get marginalized, we get made obsolete, we get replaced? That’s what I mean when I say it’s a suicide race. It’s kind of like we’re rushing towards this cliff, but the closer to the cliff we get, the more scenic the views are and the more money there is there. So we keep going, but we have to also stop at some point, quit while we’re ahead.


It’s a suicide race, which cannot be won. But the way to really benefit from it is to continue developing awesome AI, a little bit slower, so we make it safe, make sure it does the things that humans want, and create a condition where everybody wins. Technology has shown us that geopolitics and politics in general is not a zero-sum game at all.

Lex Fridman (56:32):

So there is some rate of development that will lead us as a human species to lose control of this thing, and the hope you have is that there’s some lower rate of development which will not allow us to lose control. This is an interesting thought you have about losing control. So if you are somebody like Sundar Pichai or Sam Altman, at the head of a company like this, you’re saying if they develop an AGI, they too will lose control of it. So no one person can maintain control, no group of individuals can maintain control.

Max Tegmark (57:07):

If it’s created very, very soon and is a big black box that we don’t understand, like the large language models, yeah, then I’m very confident they’re gonna lose control. But this isn’t just me saying it. Sam Altman and Demis Hassabis have both said, they themselves acknowledge, that there are really great risks with this, and they wanna slow down once they feel like it’s scary. But it’s clear that they’re stuck; again, Moloch is forcing them to go a little faster than they’re comfortable with, just because of commercial pressures, right?


To get a bit optimistic here: of course, this is a problem that can ultimately be solved. It’s just that, to win this wisdom race, it’s clear that what we hoped was gonna happen hasn’t happened.


The capability progress has gone faster than a lot of people thought, and the progress in the public sphere of policymaking and so on has gone slower than we thought. Even the technical AI safety research has gone slower. A lot of the technical safety research was kind of banking on the idea that large language models and other poorly understood systems couldn’t get us all the way, that you had to build more of a kind of intelligence that you could understand, that maybe could prove itself safe, things like this.


And I’m quite confident that this can be done, so we can reap all the benefits, but we cannot do it as quickly as this out-of-control express train we are on now is gonna get to AGI. That’s why we need a little more time, I feel.

Lex Fridman (58:41):

Is there something to be said for what Sam Altman talked about, which is, while we’re in the pre-AGI stage, to release often and as transparently as possible, to learn a lot? So, as opposed to being extremely cautious, release a lot. Don’t invest in closed development where you focus on AI safety. While it’s somewhat dumb, quote-unquote, release as often as possible. And as you start to see signs of human-level intelligence or superhuman-level intelligence, then you put a halt on it.

Max Tegmark (59:20):

Well, what a lot of safety researchers have been saying for many years is that the most dangerous things you can do with an AI are, first of all, teach it to write code. Yeah, because that’s the first step towards recursive self-improvement, which can take it from AGI to much higher levels. Okay, oops, we’ve done that. And another high-risk thing is to connect it to the internet. Let it go to websites, download stuff on its own, talk to people. Oops, we’ve done that already. You know, Eliezer Yudkowsky, you said you interviewed him recently, right? So, he had this tweet recently which gave me one of the best laughs in a while, where he was like, hey, people used to make fun of me and say, you’re so stupid, Eliezer, because you’re saying you have to worry.


Obviously, developers, once they get to really strong AI, the first thing they’re gonna do is never connect it to the internet, keep it in a box where they can really study it. So he had written it in meme form, so it’s like: then, and now: LOL, let’s make a chatbot. Yeah, yeah, yeah. And the third thing: Stuart Russell, amazing AI researcher, has argued for a while that we should never teach AI anything about humans. Above all, we should never let it learn about human psychology and how you manipulate humans.


That’s the most dangerous kind of knowledge you can give it. You can teach it all it needs to know about how to cure cancer and stuff like that, but don’t let it read Daniel Kahneman’s book about cognitive biases and all that. And then, oops, LOL, let’s invent social media recommender algorithms, which do exactly that. They get so good at knowing us and pressing our buttons that we’re starting to create a world now where we just have ever more hatred, because these algorithms figured out, not out of evil, but just to make money on advertising, that the best way to get more engagement, the euphemism, to get people glued to their little rectangles, is just to make them pissed off.

Lex Fridman (01:01:38):

That’s really interesting, that a large AI system that’s doing the recommender-system kind of task on social media is basically just studying human beings, because it’s a bunch of us rats giving it signal, nonstop signal. It’ll show a thing, and then we give signal on whether we spread that thing, we like that thing, that thing increases our engagement, gets us to return to the platform. And it has that on the scale of hundreds of millions of people, constantly, so it’s just learning and learning and learning. And presumably, the larger the number of parameters in the neural network that’s doing the learning, and the more end-to-end the learning is, the more it’s able to basically encode how to manipulate human behavior, how to control humans at scale.
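A deliberately simple sketch of the feedback loop being described: an epsilon-greedy recommender that logs engagement signal and converges on whatever maximizes it, with no term anywhere for user wellbeing. The simulated "user" and the item set are hypothetical:

```python
import random

n_items = 5
score = [0.0] * n_items  # learned engagement estimate per item
count = [0] * n_items

def user_reacts(item):
    # Hypothetical stand-in for a real user: suppose item 3 is the
    # rage-bait, and it reliably gets engagement.
    return random.random() < (0.9 if item == 3 else 0.3)

for _ in range(10_000):
    if random.random() < 0.1:
        item = random.randrange(n_items)                    # explore
    else:
        item = max(range(n_items), key=lambda i: score[i])  # exploit
    engaged = user_reacts(item)
    count[item] += 1
    score[item] += (engaged - score[item]) / count[item]    # running mean

# The loop ends up showing item 3 almost all the time: engagement is
# maximized; nothing in the objective asks whether that's good for anyone.
print(score)
```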

Max Tegmark (01:02:24):

Exactly, and that is not something you think is in humanity’s interest. Yes. Right now, it’s mainly letting some humans manipulate other humans for profit and power, which has already caused a lot of damage. And eventually, that’s the sort of skill that can make AIs persuade humans to let them escape whatever safety precautions we had put in place. You know, there was a really nice article in the New York Times recently by Yuval Noah Harari and two co-authors, including Tristan Harris from The Social Dilemma, and they have this phrase in there I love: humanity’s first contact with advanced AI was social media, and we lost that one. We now live in a world where there’s much more hate, in fact, and in our democracy, where we’re having this conversation, people can’t even agree on who won the last election.


And we humans often point fingers at other humans and say it’s their fault, but it’s really Moloch and these AI algorithms. We got the algorithms and then Moloch pitted the social media companies against each other so nobody could have a less creepy algorithm because then they would lose out on revenue to the other company.

Lex Fridman (01:03:45):

Is there any way to win that battle back, if we just linger on this one battle that we’ve lost, in terms of social media? Is it possible to redesign social media, this very medium which we use as a civilization to communicate with each other, to have these kinds of conversations, to have discourse, to try to figure out how to solve the biggest problems in the world, whether that’s nuclear war or the development of AGI? Is it possible to do social media right?

Max Tegmark (01:04:14):

I think it’s not only possible, but it’s necessary. Who are we kidding that we’re going to be able to solve all these other challenges if we can’t even have a conversation with each other that’s constructive? The whole idea, the key idea of democracy, is that you get a bunch of people together and they have a real conversation, the ones you try to foster on this podcast, where you respectfully listen to people you disagree with. And you realize, actually, there is some common ground we have, and we both agree, let’s not have a nuclear war, let’s not do that, et cetera, et cetera.


We’re kidding ourselves thinking we can face off the second contact with ever more powerful AI that’s happening now with these large language models if we can’t even have a functional conversation in the public space. That’s why I started the Improve the News Project,, but I’m an optimist fundamentally in that there is a lot of intrinsic goodness in people, and that what makes the difference between someone doing good things for humanity and bad things is not some sort of fairytale thing that this person was born with the evil gene and this one was not born with a good gene, no. I think it’s whether we put, whether people find themselves in situations that bring out the best in them or that bring out the worst in them. And I feel we’re building an internet and a society that brings out the worst.

Lex Fridman (01:05:53):

But it doesn’t have to be that way. No, it does not. It’s possible to create incentives and also create incentives that make money. They both make money and bring out the best in people.

Max Tegmark (01:06:03):

I mean, in the long term, it's not a good investment for anyone to have a nuclear war, for example. And is it a good investment for humanity if we just ultimately replace all humans by machines, and then we're so obsolete that eventually there are no humans left? Well, it depends, I guess, on how you do the math. But I would say by any reasonable economic standard, if you look at the future income of humans and there aren't any, that's not a good investment. Moreover, why can't we have a little bit of pride in our species, damn it? Why should we just build another species that gets rid of us?


If we were Neanderthals, would we really consider it a smart move, if we had really advanced biotech, to build Homo sapiens? You might say, hey, Max, yeah, let's build these Homo sapiens. They're going to be smarter than us. Maybe they can help us defend ourselves better against predators, and help fix up our caves, make them nicer. We'll control them, undoubtedly. So then they build a couple, a little baby girl and a little baby boy. And then you have some wise old Neanderthal elders who are like, hmm, I'm scared that we're opening a Pandora's box here, that if we build a bunch of Homo sapiens, we're going to get outsmarted by these super-Neanderthal intelligences, and there won't be any Neanderthals left. But then you have a bunch of others in the cave saying, why are you such a Luddite scaremonger? Of course they're going to want to keep us around, because we are their creators.


And, you know, the smarter they get, I think the nicer they're going to get. They're going to love us; they're going to want us around, and it's going to be fine. And besides, look at these babies, they're so cute. Clearly, they're totally harmless. Those babies are exactly GPT-4. It's not, I want to be clear, it's not GPT-4 that's terrifying. It's that GPT-4 is a baby technology. You know, and Microsoft even had a paper out recently with a title something like Sparks of AGI. They were basically saying this is baby AI, like these little Neanderthal babies.


And it’s going to grow up. There’s going to be other systems from the same company, from other companies, they’ll be way more powerful. But they’re going to take all the things, ideas from these babies. And before we know it, we’re going to be like those last Neanderthals who were pretty disappointed when they realized that they were getting replaced.

Lex Fridman (01:08:44):

Well, there's an interesting point you make, which is the programming. It's entirely possible that GPT-4 is already the kind of system that can change everything by writing programs.

Max Tegmark (01:08:57):

Yeah, because it's Life 2.0. The systems I'm afraid of are going to look nothing like a large language model. But once it, or other people, figure out a way of using this tech to make much better tech, it's just constantly replacing its software. And from everything we've seen about how these work under the hood, they're like the minimum viable intelligence. They do everything in the dumbest way that still works, sort of. Yeah. So they are Life 2.0, except when they replace their software, it's a lot faster than when you decide to learn Swedish.


And moreover, they think a lot faster than us, too. We don't take one logical step every nanosecond or so, the way they do. And we can't just suddenly scale up our hardware massively in the cloud, we're so limited, right? So they also become a little bit more like Life 3.0, in that if they need more hardware, hey, just rent it in the cloud, you know? How do you pay for it? Well, with all the services you provide.

Lex Fridman (01:10:23):

And what we haven’t seen yet, which could change a lot, is entire software systems. So right now, programming is done sort of in bits and pieces as an assistant tool to humans. But I do a lot of programming, and with the kind of stuff that GPT-4 is able to do, I mean, it’s replacing a lot what I’m able to do. But you still need a human in the loop to kind of manage the design of things, manage like what are the prompts that generate the kind of stuff, to do some basic adjustment of the code, let’s do some debugging.


But if it’s possible to add on top of GPT-4 kind of feedback loop of self-debugging, improving the code, and then you launch that system out into the wild on the internet because everything is connected, and have it do things, have it interact with humans, and then get that feedback, now you have this giant ecosystem of humans. It’s one of the things that Elon Musk recently tweeted as a case why everyone needs to pay $7 or whatever for Twitter. To make sure they’re real. Make sure they’re real. We’re now going to be living in a world where the bots are getting smarter and smarter and smarter to a degree where you can’t tell the difference between a human and a bot. That’s right. And now you can have bots outnumber humans by one million to one, which is why he’s making the case why you have to pay to prove you’re human, which is one of the only mechanisms to prove, which is depressing.

Max Tegmark (01:11:59):

Yeah, and I feel we have to remember, as individuals, we should from time to time ask ourselves why we're doing what we're doing. And as a species, we need to do that too. So if we're building, as you say, machines that are outnumbering us, and more and more outsmarting us, and replacing us on the job market, not just for the dangerous and boring tasks, but also for writing poems and doing art and things that a lot of people find really meaningful, we've gotta ask ourselves why. Why are we doing this? The answer is, Moloch is tricking us into doing it. Tricking us into doing it.


And it’s such a clever trick that even though we see the trick, we still have no choice but to fall for it, right? Also, the thing you said about you using co-pilot AI tools to program faster, how many times, what factor faster would you say you code now? Does it go twice as fast or?

Lex Fridman (01:13:00):

I don’t really, because it’s such a new tool. It’s, I don’t know if speed is significantly improved, but it feels like I’m a year away from being five to 10 times faster.

Max Tegmark (01:13:14):

So if that’s typical for programmers, then you’re already seeing another kind of self, recursive self-improvement, right? Because previously, one, like, a major generation of improvement of the code would happen on the human R&D timescale. And now, if that’s five times shorter, then it’s gonna take five times less time than it otherwise would to develop the next level of these tools, and so on. So this, these are the, this is exactly the sort of beginning of an intelligence explosion. There can be humans in the loop a lot in the early stages, and then eventually, humans are needed less and less in the machine learning world. Humans are needed less and less, and the machines can more kind of go along. But what you said there is just an exact example of these sort of things. Another thing which, I was kind of lying on my psychiatrist, imagining I’m on a psychiatrist’s couch here, saying, well, what are my fears that people would do with AI systems?


So I mentioned three fears I had many years ago about what they would do: namely, teach it to code, connect it to the internet, and teach it to manipulate humans. A fourth one is building an API, where code can control this super powerful thing, right? That's very unfortunate, because one thing that systems like GPT-4 have going for them is that they are an oracle, in the sense that they just answer questions. There is no robot connected to GPT-4. GPT-4 can't go and do stock trading based on its thinking. It is not an agent. An intelligent agent is something that takes in information from the world, processes it to figure out what action to take based on the goals it has, and then does something back on the world. But once you have an API to, for example, GPT-4, nothing stops Joe Schmoe and a lot of other people from building real agents, which just keep making calls in some inner loop to these powerful oracle systems, which makes them much more powerful. That's another kind of unfortunate development, which I think we would have been better off delaying. I don't want to pick on any particular companies. I think they're all under a lot of pressure to make money.
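
(A sketch of the oracle-to-agent pattern just described: wrap a question-answering model in a loop that observes the world, asks what to do, and acts. Every function here, oracle(), observe(), act(), is a hypothetical stand-in, not a real API.)

    import time

    def oracle(question: str) -> str:
        # Stand-in for an API call to a powerful question-answering model.
        raise NotImplementedError

    def observe() -> str:
        # Stand-in for reading prices, messages, sensors, web pages...
        raise NotImplementedError

    def act(command: str) -> None:
        # Stand-in for trading, posting, emailing, controlling things...
        raise NotImplementedError

    def agent_loop(goal: str) -> None:
        # The inner loop that turns a passive oracle into an agent:
        # information in, decision from the oracle, action out, repeat.
        while True:
            state = observe()
            command = oracle(f"Goal: {goal}\nState: {state}\n"
                             "What single action should I take next?")
            act(command)
            time.sleep(1)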


Again, the reason we’re calling for this pause is to give them all cover to do what they know is the right thing, slow down a little bit at this point. But everything we’ve talked about, I hope we’ll make it clear to people watching this why these sort of human-level tools can cause a gradual acceleration. You keep using yesterday’s technology to build tomorrow’s technology, and when you do that over and over again, you naturally get an explosion. That’s the definition of an explosion in science. Like if you have two people and they fall in love, now you have four people, and then they can make more babies and now you have eight people, and then you have 16, 32, 64, et cetera. We call that a population explosion, where it’s just that each, if it’s instead three neutrons in a nuclear reaction, if each one can make more than one, then you get an exponential growth in that. We call it a nuclear explosion. All explosions are like that. And an intelligence explosion, it’s just exactly the same principle, that some amount of intelligence can make more intelligence than that, and then repeat. You always get exponentials.

Lex Fridman (01:16:55):

What’s your intuition, why does, you mentioned there’s some technical reasons why it doesn’t stop at a certain point. What’s your intuition, and do you have any intuition why it might stop?

Max Tegmark (01:17:07):

It’s obviously going to stop when it bumps up against the laws of physics. There are some things you just can’t do no matter how smart you are, right?

Lex Fridman (01:17:13):

Allegedly, because we don't know the full laws of physics yet, right?

Max Tegmark (01:17:18):

Seth Lloyd wrote a really cool paper on the physical limits of computation, for example. If you put too much energy into a finite space, it'll turn into a black hole. You can't move information around faster than the speed of light, stuff like that. And it's hard to store much more than a modest number of bits per atom, et cetera. But those limits are just astronomically above where we are now, like 30 orders of magnitude above. So that's a bigger jump in intelligence than if you go from an ant to a human.


Of course, what we want is a controlled thing, like a nuclear reactor: you put moderators in to make sure exactly that it doesn't blow up out of control, right? When we do experiments with biology and cells and so on, we also try to make sure they don't get out of control. We can do this with AI, too. The thing is, we haven't succeeded yet, and Moloch is doing exactly the opposite, just egging everybody on: faster, faster, faster, or the other company is going to catch up with you, or the other country is going to catch up with you. And I don't believe in just asking people to look into their hearts and do the right thing. It's easy for others to say that, but if you're in a situation where your company is going to get screwed by other companies that are not stopping, you're putting people in a very hard situation. The right thing to do is change the whole incentive structure instead.


Maybe I should say one more thing about this, because Moloch has been around as humanity's number one or number two enemy since the beginning of civilization, and we came up with some really cool countermeasures. First of all, already over 100,000 years ago, evolution realized that it was very unhelpful that people kept killing each other all the time. So it genetically gave us compassion, and made it so that if you get two drunk dudes getting into a pointless bar fight, they might give each other black eyes, but they have a lot of inhibition towards just killing each other. And similarly, if you find a baby lying on the street when you go out for your morning jog tomorrow, you're going to stop and pick it up, right? Even though it may make you late for your next podcast. So evolution gave us these genes that make our own egoistic incentives more aligned with what's good for the greater group we're part of, right?


Then, as we got a bit more sophisticated and developed language, we invented gossip, which is also a fantastic anti-Moloch, right? Because now it really discourages liars, moochers, and cheaters; their own incentive is now not to do those things, because word quickly gets around, and then suddenly people aren't going to invite them to their dinners anymore or trust them. And when we got still more sophisticated and bigger societies, we invented the legal system, where even strangers, who couldn't rely on gossip and things like this, would have an incentive to treat each other well. Now those guys in the bar fight, even if someone is so drunk that he actually wants to kill the other guy, he also has a little thought in the back of his head: do I really want to spend the next 10 years eating really crappy food in a small room? I'm just going to chill out.
And we have similarly tried to give these incentives to our corporations, by having regulation and all sorts of oversight, so that their incentives are aligned with the greater good. We've tried really hard. And the big problem we're facing now is not that we haven't tried before, but that the tech is developing much faster than the regulators have been able to keep up with, right?


So regulators, it’s kind of comical that European Union right now is doing this AI act, right? Which, and in the beginning, they had a little opt-out exception that GPT-4 would be completely excluded from regulation. Brilliant idea. What’s the logic behind that? Some lobbyists pushed successfully for this. So we were actually quite involved with the Future of Life Institute, Mark Brackel, Christo Ouk, Anthony Aguirre, and others. You know, we’re quite involved with educating various people involved in this process about these general purpose AI models coming and pointing out that they would become the laughing stock if they didn’t put it in.


So the French started pushing for it. It got put into the draft, and it looked like all was good. And then there was a huge counter-push from lobbyists. There were more lobbyists in Brussels from tech companies than from oil companies, for example. And it looked like it might get taken out again. And then GPT-4 happened, and I think it's gonna stay in. But this just shows, you know, Moloch can be defeated.


But the challenge we’re facing is that the tech is generally much faster than what the policymakers are. And a lot of the policymakers also don’t have a tech background. So it’s, you know, we really need to work hard to educate them on what’s taking place here. So we’re getting the situation where the first kind of non… So I define artificial intelligence just as non-biological intelligence. And by that definition, a company, a corporation is also an artificial intelligence because the corporation isn’t, it’s humans, it’s a system. If its CEO decides, if the CEO of a tobacco company decides one morning that she or he doesn’t wanna sell cigarettes anymore, they’ll just put another CEO in there. So it’s not enough to align the incentives of individual people or align individual computers incentives to their owners, which is what technically AI safety research is about. You also have to align the incentives of corporations with a greater good. And some corporations have gotten so big and so powerful very quickly that in many cases, their lobbyists instead align the regulators with what they want rather than the other way around. It’s a classic regulatory capture.

Lex Fridman (01:24:14):

All right. Is the thing that the slowdown hopes to achieve giving the regulators enough time to catch up, or giving the companies themselves enough time to breathe and understand how to do AI safety correctly?

Max Tegmark (01:24:27):

I think both, but the path to success I see is, first, you give a breather to the people in these companies, their leadership, who want to do the right thing, and they all have safety teams and so on at their companies. Give them a chance to get together with the other companies, and the outside pressure can also help catalyze that, right? And work out: what are the reasonable safety requirements one should put on future systems before they get rolled out? There are a lot of people also in academia and elsewhere, outside of these companies, who can be brought into this and have a lot of very good ideas. And then I think it's very realistic that within six months you can get these people coming up with, here's a white paper, here's what we all think is reasonable. Just because cars killed a lot of people, we didn't ban cars; we got together a bunch of people and decided that, in order to be allowed to sell a car, it has to have a seatbelt in it.


They’re the analogous things that you can start requiring a future AI systems so that they are safe. And once this heavy lifting, this intellectual work has been done by experts in the field which can be done quickly, I think it’s gonna be quite easy to get policy makers to see, yeah, this is a good idea. And for the companies to fight Moloch, they want, and I believe Sam Altman has explicitly called for this, they want the regulators to actually adopt it so that their competition is gonna abide by it too. You don’t wanna be enacting all these principles and then you abide by them and then there’s this one little company that doesn’t sign on to it and then now they can gradually overtake you. Then the companies will be able to sleep secure knowing that everybody’s playing by the same rules.

Lex Fridman (01:26:35):

So do you think it’s possible to develop guardrails that keep the systems from basically damaging irreparably humanity while still enabling sort of the capitalist-fueled competition between companies as they develop how to best make money with this AI? You think there’s a balancing that’s possible?

Max Tegmark (01:26:57):

Absolutely, I mean, we’ve seen that in many other sectors where you’ve had the free market produce quite good things without causing particular harm. When the guardrails are there and they work, capitalism is a very good way of optimizing for just getting the same things done more efficiently. But it was good, and in hindsight, I’ve never met anyone, even on parties way over on the right in any country who thinks it was a terrible idea to ban child labor, for example.

Lex Fridman (01:27:34):

Yeah, but it seems like this particular technology has gotten so good so fast, it's become powerful to a degree where you could see, in the near term, the ability to make a lot of money. And to develop guardrails quickly in that kind of context seems to be tricky. It's not similar to cars or child labor. It seems like the opportunity to make a lot of money here, very quickly, is right here before us.

Max Tegmark (01:28:00):

There’s this cliff.

Lex Fridman (01:28:03):

And the closer you get to the cliff, the more money there is.

Max Tegmark (01:28:04):

The more gold ingots there are on the ground that you can pick up, so you want to drive there very fast. But it's not in anyone's incentive that we go over the cliff. And it's not like everybody's in their own car; all the cars are connected together with a chain, so if anyone goes over, they'll start dragging the others down too. And so, ultimately, it's in the selfish interest of the people in the companies, too, to slow down when you start seeing the contours of the cliff there in front of you, right? The problem is that even though the people who are building the technology, and the CEOs, really get it, the shareholders and these other market forces are people who honestly don't understand that the cliff is there. They usually don't; you have to get quite into the weeds to really appreciate how powerful this is, and how fast it's moving. And a lot of people are even still stuck in this carbon chauvinism, as I like to call it: the idea that you can only have our level of intelligence in humans, that there's something magical about it. Whereas the people in the tech companies who build this stuff all realize that intelligence is information processing of a certain kind, and it really doesn't matter at all whether the information is processed by carbon atoms in neurons in brains, or by silicon atoms in some technology we build.


So you brought up capitalism earlier, and there are a lot of people who love capitalism and a lot of people who really, really don't. And it struck me recently that what's happening with capitalism here is exactly analogous to the way in which superintelligence might wipe us out. I studied economics for my undergrad, at the Stockholm School of Economics, and I was very interested in how you could use market forces to just get stuff done more efficiently, but give the right incentives to the market so that it wouldn't do really bad things.


So Dylan Hadfield-Menell, who's a professor and colleague of mine at MIT, wrote this really interesting paper with some collaborators recently, where they proved mathematically that if you take one goal and just optimize for it on and on and on, indefinitely, something you think is gonna bring you in the right direction, what basically always happens is that in the beginning it will make things better for you. But if you keep going, at some point it's gonna start making things worse for you again, and then gradually it's gonna make things really, really terrible. The way I think of the proof is: suppose you wanna go from here back to Austin, for example, and you say, okay, let's just go south. You put in roughly the right direction and just optimize that: get as far south as possible. You get closer and closer to Austin, but there's always some little error, so you're not going exactly towards Austin, but you get pretty close. Then eventually you start going away again, and eventually you're gonna be leaving the solar system. And they proved, with a beautiful mathematical proof, that this happens generally. And this is very important for AI, because even though Stuart Russell has written a book and given a lot of talks on why it's a bad idea to have AI just blindly optimize something, that's what pretty much all our systems do. We have something called the loss function that we're just minimizing, or a reward function that we're just maximizing. And capitalism is exactly like that, too. We wanted to get the stuff people wanted done more efficiently, so we introduced the free market.
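
(A numerical toy version of that proof sketch, in Python: drive with a heading that is five degrees off due south, and track the true objective, distance to "Austin". The distance falls at first, bottoms out, then grows without bound. All the numbers are made up for illustration.)

    import math

    goal = (0.0, -1000.0)               # "Austin": 1000 km due south of us
    error = math.radians(5)             # small misalignment in the proxy goal
    step = (math.sin(error), -math.cos(error))   # the direction we optimize

    x = y = 0.0
    for km in range(0, 4001, 500):
        dist = math.dist((x, y), goal)  # true objective: distance to goal
        print(f"after {km:4d} km driven: {dist:7.1f} km from Austin")
        x += step[0] * 500
        y += step[1] * 500
    # Output falls from 1000 km to about 87 km, then climbs past 3000 km:
    # keep optimizing the proxy and you leave the solar system.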


Things got done much more efficiently than they did in, say, communism, right? And it got better, but then it just kept optimizing and kept optimizing, and you got ever bigger companies and ever more efficient information processing, now also very much powered by IT. And eventually a lot of people are beginning to feel, wait, we're kind of optimizing a bit too much. Like, why did we just chop down half the rainforest? And why did these regulators suddenly get captured by lobbyists, and so on? It's just the same optimization that's been running for too long. If you have an AI that actually has power over the world, and you just give it one goal and keep optimizing that, most likely everybody's gonna be like, yay, this is great in the beginning, things are getting better.


But it’s almost impossible to give it exactly the right direction to optimize in, and then eventually all hay breaks loose, right? Nick Bostrom and others have given examples that sound quite silly, like what if you just wanna tell it to cure cancer or something, and that’s all you tell it? Maybe it’s gonna decide to take over entire continents just so it can get more supercomputer facilities in there, and figure out how to cure cancer backwards, and then you’re like, wait, that’s not what I wanted, right?


And the issue with capitalism and the issue with AI have kind of merged now, because the Moloch I talked about is exactly the capitalist Moloch: we have built an economy that is optimized for only one thing, profit. And that worked great back when things were very inefficient, and it got things done better, and it worked great as long as the companies were small enough that they couldn't capture the regulators. But that's not true anymore. They keep optimizing, and now they realize these companies can make even more profit by building ever more powerful AI, even if it's reckless. Just optimize more and more and more and more.


So this is Moloch showing up again. And anyone here who has any concerns about late-stage capitalism having gone a little too far, you should worry about superintelligence, because it's the same villain in both cases: it's Moloch.

Lex Fridman (01:34:44):

And optimizing one objective function aggressively, blindly is going to take us there.

Max Tegmark (01:34:52):

So yeah, we should have this pause from time to time, and look into our hearts and ask: why are we doing this? Am I still going towards Austin, or have I gone too far? Maybe we should change direction.

Lex Fridman (01:35:06):

And that is the idea behind a halt for six months. Why six months? That seems like a very short period. Can we just linger and explore different ideas here? Because this feels like a really important moment in human history, where pausing would actually have a significant positive effect.

Max Tegmark (01:35:24):

We said six months because we figured the number one pushback we were gonna get in the West was, but China! And everybody knows there's no way that China is gonna catch up with the West on this in six months, so that argument goes off the table, and you can forget about geopolitical competition and just focus on the real issue. That's why we picked six months.

Lex Fridman (01:35:51):

That’s really interesting. But you’ve already made the case that even for China, if you actually wanna take on that argument, China too would not be bothered by a longer halt, because they don’t wanna lose control, even more than the West doesn’t. That’s what I think. That’s a really interesting argument. I have to actually really think about that, which the kind of thing people assume is if you develop an AGI, that open AI, if they’re the ones that do it, for example, they’re going to win. But you’re saying no, everybody loses.

Max Tegmark (01:36:25):

Yeah, it’s gonna get better and better and better, and then kaboom, we all lose. That’s what’s gonna happen.

Lex Fridman (01:36:31):

When lose and win are defined on a metric of basically quality of life for human civilization and for Sam Altman.

Max Tegmark (01:36:40):

Both. To be blunt, my personal guess, and people can quibble with this, is that there just won't be any humans. That's it. That's what I mean by lose. We can see in history that once you have some species, or some group of people, who aren't needed anymore, it doesn't usually work out so well for them, right? Yeah.


There were a lot of horses that were used for transportation in Boston, and then the car got invented, and most of them got, you know, we don't need to go there. And if you look at humans: why did the labor movement succeed after the Industrial Revolution? Because it was needed. Even though we had a lot of Molochs, and there was child labor and so on, the companies still needed to have workers. That's why strikes had power, and so on.


If we get to the point where most humans aren't needed anymore, I think it's quite naive to think that they're gonna still be treated well. We say that, yeah, yeah, everybody's equal, and the government will always protect them. But if you look in practice, groups that are very disenfranchised and don't have any actual power usually get screwed. In the Industrial Revolution, we automated away muscle work, and that worked out pretty well eventually, because we educated ourselves and started working with our brains instead, and got usually more interesting, better-paid jobs. But now we're getting to the point where we're replacing brain work. We replaced a lot of boring stuff, like we got the pocket calculator, so you don't have people adding and multiplying numbers at work anymore. Fine, there were better jobs they could get. But now GPT-4, you know, and Stable Diffusion and techniques like this, they're really beginning to blow away some jobs that people really love having. There was a heartbreaking article posted just yesterday on social media that I saw, about this guy who was doing 3D modeling for gaming, and all of a sudden, now that he has this new software, he just writes prompts, and he feels his whole job that he loved just lost its meaning, you know? And I asked GPT-4 to rewrite Twinkle, Twinkle Little Star in the style of Shakespeare.


I couldn’t have done such a good job. It was really impressive. You’ve seen a lot of the art coming out here, right? So I’m all for automating away the dangerous jobs and the boring jobs, but I think you hear a lot, some arguments which are too glib. Sometimes people say, well, that’s all that’s gonna happen. We’re getting rid of the boring, tedious, dangerous jobs. It’s just not true. There are a lot of really interesting jobs that are being taken away now. Journalism is gonna get crushed. Coding is gonna get crushed. I predict the job market for programmers, salaries are gonna start dropping.


You know, if you can code five times faster, then you need five times fewer programmers. Maybe there'll be more output also, but you'll still end up needing fewer programmers than today. And I love coding. I think it's super cool. So we need to stop and ask ourselves why, again, we're doing this as humans, right?


I feel that AI should be built by humanity, for humanity. Let's not forget that. It shouldn't be by Moloch, for Moloch. Or, what it really is now, is kind of by humanity, for Moloch, which doesn't make any sense. It's for us that we're doing it. It would make a lot more sense if we figure out, gradually and safely, how to make all this tech, and then we think about which kinds of jobs people really don't wanna have, and automate them all away. And then we ask, what are the jobs that people really find meaning in?


Like maybe taking care of children in the daycare center, maybe doing art, et cetera, et cetera. And even if it were possible to automate those away, we don't need to do that, right? We built these machines.

Lex Fridman (01:41:09):

Well, it’s possible that we redefine or rediscover what are the jobs that give us meaning. So for me, the thing, it is really sad. Like I, half the time I’m excited, half the time I’m crying as I’m generating code because I kind of love programming. It’s an act of creation.


You have an idea, you design it, and then you bring it to life, and it does something, especially if there's some intelligence to it. It doesn't even have to have intelligence: printing Hello World on screen, you made a little machine and it comes to life. And there's a bunch of tricks you learn along the way, because you've been doing it for many, many years. And then to see AI be able to generate all the tricks you thought were special... I don't know, it's scary, it's almost painful.


Like a loss of innocence, maybe. When I was younger, before I learned that sugar is bad for you and you should be on a diet, I remember I enjoyed candy deeply, in a way I just can't anymore now that I know it's bad for me. I enjoyed it unapologetically, fully, just intensely. And I lost that. Now I feel like a little bit of that is being lost for me with programming, similar to how it is for the 3D modeler, no longer being able to really enjoy the art of modeling 3D things for gaming. I don't know what to make of that. Maybe I would rediscover that the true magic of what it means to be human is connecting with other humans, to have conversations like this, to have sex, to eat food, to really intensify the value of conscious experiences, versus creating other stuff.

Max Tegmark (01:43:09):

You’re pitching the rebranding again from Homo sapiens to Homo sentiens. The meaningful experiences. And just to inject some optimism in this here so we don’t sound like a bunch of gloomers. You know, we can totally have our cake and eat it. You hear a lot of totally bullshit claims that we can’t afford having more teachers. Have to cut the number of nurses. You know, that’s just nonsense, obviously. With anything, even quite far short of AGI, we can dramatically improve, grow the GDP and produce a wealth of goods and services. It’s very easy to create a world where everybody’s better off than today. Including the richest people can be better off as well. It’s not a zero-sum game in technology. Again, you can have two countries like Sweden and Denmark have all these ridiculous wars century after century.


Sometimes Sweden got a little better off, because it got a little bit bigger, and then Denmark got a little bit better off, because Sweden got a little bit smaller. But then technology came along, and we both got just dramatically wealthier without taking anything away from anyone else. It was just a total win for everyone. And AI can do that on steroids. If you can build safe AGI, if you can build superintelligence, basically all of the limitations that cause harm today can be completely eliminated. It's a wonderful possibility. This is not sci-fi. This is something which is clearly possible according to the laws of physics, and we can talk about ways of making it safe as well. But unfortunately, that'll only happen if we steer in that direction.


That’s absolutely not the default. That’s why income inequality keeps going up. That’s why the life expectancy in the U.S. has been going down. Now I think it’s four years in a row. I just read a heartbreaking study from the CDC about how something like one-third of all teenage girls in the U.S. have been thinking about suicide. Those are steps in totally the wrong direction.


And it’s important to keep our eyes on the prize here. That we have the power now for the first time in the history of our species to harness artificial intelligence to help us really flourish and help bring out the best in our humanity rather than the worst of it. To help us really make the most of our time to help us have really fulfilling experiences that feel truly meaningful. And you and I shouldn’t sit here and dictate to future generations what they will be. Let them figure it out. But let’s give them a chance to live and not foreclose all these possibilities for them by just messing things up, right?

Lex Fridman (01:46:01):

And for that, we’ll have to solve the AI safety problem. It would be nice if we can linger on exploring that a little bit. So one interesting way to enter that discussion is you tweeted and Elon replied. You tweeted, let’s not just focus on whether GPT-4 will do more harm or good on the job market, but also whether its coding skills will hasten the arrival of superintelligence. That’s something we’ve been talking about, right? So Elon proposed one thing in the reply, saying maximum truth-seeking is my best guess for AI safety. Can you maybe steel man the case for this objective function of truth and maybe make an argument against it? And in general, what are your different ideas to start approaching the solution to AI safety?

Max Tegmark (01:46:51):

I didn’t see that reply, actually.

Lex Fridman (01:46:53):

Oh, interesting.

Max Tegmark (01:46:53):

But I really resonate with it, because AI is not evil. It caused people around the world to hate each other much more, but that’s because we made it in a certain way. It’s a tool. We can use it for great things and bad things, and we could just as well have AI systems. And this is part of my vision for success here, truth-seeking AI that really brings us together again. You know, why do people hate each other so much between countries and within countries? It’s because they each have totally different versions of the truth, right?


If they all had the same truth, and trusted it for good reason, because they could check it and verify it and not have to believe in some self-proclaimed authority, there wouldn't be nearly as much hate. There'd be a lot more understanding instead. And this is, I think, something AI can help enormously with. For example, a little baby step in this direction is this website called Metaculus, where people bet and make predictions, not for money, but just for their own reputation. And it's kind of funny, actually: you treat the humans like you treat AIs. You have a loss function, where they get penalized if they're super confident on something and then the opposite happens. Whereas if you're kind of humble, and you say, I think there's a 51% chance this is going to happen, and then the other thing happens, you don't get penalized much. And what you can see is that some people are much better at predicting than others. They've earned your trust, right? One project that I'm working on right now, an outgrowth of the Improve the News Foundation together with the Metaculus folks, is seeing if we can really scale this up a lot with more powerful AI, because I would love it if we could do that. I would love for there to be a really powerful truth-seeking system that is trustworthy because it keeps being right about stuff, and where people who come to it, and maybe look at its latest trust ranking of different pundits and newspapers, et cetera, if they want to know why someone got a low score, can click on it and see all the predictions that person actually made, and how they turned out.
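
(Metaculus's actual scoring details differ, but any "proper" scoring rule behaves the way described: being confidently wrong costs far more than being humbly wrong, so honest calibration is the best strategy. A sketch in Python using the logarithmic score:)

    import math

    def log_score(p: float, happened: bool) -> float:
        # p is the forecaster's probability that the event happens;
        # lower scores are better (it's a penalty).
        return -math.log(p if happened else 1 - p)

    # The event does NOT happen:
    print(log_score(0.51, False))  # humble 51% guess: penalty ~0.71
    print(log_score(0.99, False))  # confident 99% guess: penalty ~4.61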


This is how we do it in science. You trust scientists like Einstein, who said something everybody thought was bullshit and turned out to be right: you get a lot of trust points. And he did it multiple times, even. I think AI has the power to really heal a lot of the rifts we're seeing, by creating trust systems. Get away from this idea today of some fact-checking site, which might itself have an agenda, that you just trust because of its reputation. You want systems that earn their trust and are completely transparent. This, I think, would actually help a lot; it can help heal the very dysfunctional conversation that humanity has about how it's going to deal with its biggest challenges in the world today. And then, on the technical side, another common sort of gloom comment I get, from people who are saying we're just screwed, there's no hope, is: well, things like GPT-4 are way too complicated for a human to ever understand and prove that they can be trustworthy.


They’re forgetting that AI can help us prove that things work, right? And there’s this very fundamental fact that, in math, it’s much harder to come up with a proof than it is to verify that the proof is correct. You can actually write a little proof-checking code, which is quite short, but you can, as a human, understand it. And then it can check the most monstrously long proof ever generated, even by a computer, and say, yeah, this is valid. This is valid. So right now, we have this approach with virus-checking software that it looks to see if there’s something, if you should not trust it. And if it can prove to itself that you should not trust that code, it warns you, right? What if you flip this around? And this is an idea I should give credit to Steve, I’m a hundo for, so that it will only run the code if it can prove, instead of not running it, if it can prove that it’s not trustworthy, if it will only run it if it can prove that it’s trustworthy. So it asks the code, prove to me that you’re going to do what you say you’re going to do. And it gives you this proof.


And your little proof-checker can check it. Now you can actually trust an AI that's much more intelligent than you are, right? Because it's the AI's problem to come up with this proof, which you could never have found yourself, but once you have it, you can check it and trust it.
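
(A down-to-earth analogy for that asymmetry, in Python. Tegmark's proposal involves formal proofs about program behavior; here the "proof" is just a factorization certificate: finding it is expensive, while checking it is a few lines you can fully audit.)

    def check_certificate(n: int, p: int, q: int) -> bool:
        # The whole "proof-checker": short enough to fully audit.
        return 1 < p < n and 1 < q < n and p * q == n

    def find_certificate(n: int):
        # The prover's job: brute-force search, which blows up
        # exponentially in the number of digits of n.
        for p in range(2, int(n ** 0.5) + 1):
            if n % p == 0:
                return p, n // p
        return None

    n = 3127                        # tiny example; real cryptographic moduli are not
    cert = find_certificate(n)      # the hard direction
    print(cert, check_certificate(n, *cert))  # the easy direction: (53, 59) True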

Lex Fridman (01:51:54):

So this is the interesting point. I agree with you, but this is where Eliezer Yudkowsky might disagree, not with you, but with this idea. His claim is that a super-intelligent AI would be able to know how to lie to you with such a proof.

Max Tegmark (01:52:14):

Lie to me how, by giving me a proof that I'm going to think is correct, but isn't? It's not me that it has to trick; it's my proof-checker.

Lex Fridman (01:52:22):

It’s just a piece of code. So his general idea is a super-intelligent system can lie to a dumber proof-checker. So you’re going to have, as a system becomes more and more intelligent, there’s going to be a threshold where a super-intelligent system would be able to effectively lie to a slightly dumber AGI system. He really focuses on this weak AGI to strong AGI jump where the strong AGI can make all the weak AGIs think that it’s just one of them, but it’s no longer that.

Max Tegmark (01:52:60):

And that leap is when it runs away from you. I don't buy that argument. I think no matter how super-intelligent an AI is, it's never going to be able to prove to me that there are only finitely many primes, for example. It just can't. It can try to snow me by making up all sorts of new weird rules of deduction that say, trust me, the way your proof-checker works is too limited, and we have this new hyper-math, and in it, it's true. But then I would just take the attitude: okay, I'm going to forfeit some of these supposedly super-cool technologies. I'm only going to go with the ones that I can prove with my own trusted proof-checker.


Then I think it’s fine. There’s still, of course, this is not something anyone has successfully implemented at this point, but I think I just give it as an example of hope. We don’t have to do all the work ourselves. This is exactly the sort of very boring and tedious task that is perfect to outsource to an AI. And this is a way in which less powerful and less intelligent agents like us can actually continue to control and trust more powerful ones.

Lex Fridman (01:54:10):

So build AGI systems that help us defend against other AGI systems.

Max Tegmark (01:54:14):

Well, for starters, begin with the simple problem of just making sure that the system you own, the one that's supposed to be loyal to you, has to prove to you that it's always going to do the things that you actually want it to do. And if it can't prove it, maybe it would still do them, but you won't run it. So you forfeit some aspects of all the cool things AI can do; I bet you dollars to donuts it can still do some incredibly cool stuff for you. Yeah.


There are other things, too, that we shouldn't sweep under the rug, like the fact that not every human agrees on exactly what direction we should go with humanity, right? Yes. And you've talked a lot about geopolitical things on your podcast to this effect. But I think that shouldn't distract us from the fact that there are actually a lot of things virtually everybody in the world agrees on: that, hey, having no humans on the planet in the near future, let's not do that, right? You look at something like the United Nations Sustainable Development Goals; some of them are quite ambitious, and basically all the countries agree: US, China, Russia, Ukraine, they all agree. So instead of quibbling about the little things we don't agree on, let's start with the things we do agree on and get them done, instead of being so distracted by all these things we disagree on that Moloch wins. Because, frankly, Moloch going wild now, it feels like a war on life playing out in front of our eyes. If you just look at it from space: we're on this planet, a beautiful, vibrant ecosystem, and we start chopping down big parts of it, even though most people thought that was a bad idea. We start doing ocean acidification, wiping out all sorts of species. And now we have all these close calls; we almost had a nuclear war.


And we’re replacing more and more of the biosphere with non-living things. We’re also replacing in our social lives a lot of the things which were so valuable to humanity. A lot of social interactions now are replaced by people staring into their rectangles, right? And I’m not a psychologist, I’m out of my depth here, but I suspect that part of the reason why teen suicide and suicide in general in the US, the record-breaking levels is actually caused by, again, AI technologies and social media making people spend less time with actually just human interaction. We’ve all seen a bunch of good-looking people in restaurants staring into their rectangles instead of looking into each other’s eyes, right? So that’s also a part of the war on life, that we’re replacing so many really life-affirming things by technology. We’re putting technology between us.


The technology that was supposed to connect us is actually distancing us from each other. And then we're giving ever more power to things which are not alive. These large corporations are not living things, right? They're just maximizing profit. I want life to win this war. I think we humans, together with all our fellow living things on this planet, will be better off if we can remain in control over the non-living things and make sure that they work for us. I really think it can be done.

Lex Fridman (01:57:55):

Can you just linger on this maybe high-level philosophical disagreement with Eliezer Yudkowsky, and on the hope you're stating? So he is very sure, he puts a very high probability, very close to one (depending on the day, he puts it at one), that AI is going to kill humans. He just does not see a trajectory that doesn't end up with that conclusion. What trajectory do you see that doesn't end up there? And maybe, can you see the point he's making, and can you also see a way out?

Max Tegmark (01:58:40):

Mm-hmm. First of all, I tremendously respect Eliezer Yudkowsky and his thinking. Second, I do share his view that there's a pretty large chance that we're not gonna make it as humans, that there won't be any humans on the planet in the not-too-distant future. And that makes me very sad. We just had a little baby, and I keep asking myself, you know, how old is he even gonna get?


And it feels, I said to my wife recently, it feels a little bit like I was just diagnosed with some sort of cancer, which has some risk of killing me and some chance of being survivable, you know. Except this is the kind of cancer that would kill all of humanity. So I completely take his concerns seriously, but I absolutely don't think it's hopeless.


First of all, there's a lot of momentum now, for the first time, actually, in the many, many years that have passed since I and many others started warning about this. I feel most people are getting it now. I was just talking to this guy at the gas station near our house the other day, and he's like, I think we're getting replaced. So it's positive that we're finally seeing this reaction, which is the first step towards solving the problem. Second, I really think that this vision of only running AIs, if the stakes are really high, that can prove to us that they're safe, is really just virus checking in reverse again. I think it's scientifically doable. I don't think it's hopeless.


We might have to forfeit some of the technology that we could get if we were putting blind faith in our AIs, but we’re still gonna get amazing stuff.

Lex Fridman (02:00:53):

Do you envision a process with a proof-checker, where something like GPT-4 or GPT-5 would go through a process of...

Max Tegmark (02:00:58):

...just a rigorous interrogation? No, I think that's hopeless. That's like trying to prove theorems about a pile of spaghetti. The vision I have for success is instead that, just like we human beings were able to look at our brains and distill out the key knowledge. Galileo, when his dad threw him an apple when he was a kid, he was able to catch it, because his brain could, in this funny spaghetti kind of way, predict how parabolas are gonna move. Kahneman's System 1, right? But then he got older, and it's like, wait, this is a parabola. It's y equals x squared. I can distill this knowledge out, and today you can easily program it into a computer, and it can simulate not just that, but how to get to Mars and so on, right? My vision is a similar process, where we use the amazing learning power of neural networks to discover the knowledge in the first place, but we don't stop with a black box and use that.


We then do a second round of AI, where we use automated systems to extract out the knowledge and see what insights it's had. And then we put that knowledge into a completely different kind of architecture, or programming language, or whatever, that's made in a way that can be both really efficient and also much more amenable to formal verification.
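
(A minimal sketch of that two-stage idea, with the parabola example: a black-box learner's behavior gets distilled into a transparent three-coefficient formula. Here a polynomial fit stands in for the "automated knowledge extraction" step, which in a real system would be far more powerful.)

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-2, 2, 200)
    y = x ** 2 + rng.normal(0, 0.01, 200)  # what the black box learned to predict

    # Stage 2: extract a closed-form candidate from the learned behavior.
    coeffs = np.polyfit(x, y, deg=2)
    print(np.round(coeffs, 2))             # ~[1. 0. 0.], i.e. y = x**2

    # The distilled artifact is now a tiny, transparent program, which,
    # unlike the network that discovered it, is amenable to verification.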


That’s my vision. I’m not sitting here saying I’m confident, 100% sure that it’s gonna work, but I don’t think, the chance is certainly not zero either and it will certainly be possible to do for a lot of really cool AI applications that we’re not using now. So we can have a lot of the fun that we’re excited about if we do this. We’re gonna need a little bit of time. That’s why it’s good to pause and put in place requirements. One more thing also, I think, someone might think, well, 0% chance we’re gonna survive. Let’s just give up, right? That’s very dangerous because there’s no more guaranteed way to fail than to convince yourself that it’s impossible and not try. When you study history and military history, the first thing you learn is that that’s how you do psychological warfare. You persuade the other side that it’s hopeless so they don’t even fight.


And then, of course, you win, right? Let's not do this psychological warfare on ourselves and say there's a 100% probability we're all screwed anyway. Sadly, I do get that a little bit sometimes from some young people, who are so convinced that we're all screwed that they're like, I'm just gonna play computer games and do drugs, because we're screwed anyway, right? It's important to keep the hope alive, because it actually has a causal impact and makes it more likely that we're gonna succeed.

Lex Fridman (02:04:01):

It seems like the people that actually build solutions to seemingly impossible-to-solve problems are the ones that believe. They're the optimists. And it seems like there's some fundamental law of the universe where fake-it-till-you-make-it kind of works. Believe it's possible, and it becomes possible.

Max Tegmark (02:04:22):

Yeah. Was it Henry Ford who said that if you tell yourself that it’s impossible, it is? Let’s not make that mistake. And this is a big mistake society is making, I think, all in all. Everybody’s so gloomy and the media are also very biased towards if it bleeds, it leads, and gloom and doom. So most visions of the future we have are dystopian, which really demotivates people. We wanna really, really, really focus on the upside also to give people the willingness to fight for it.


And for AI, you and I have mostly talked about gloom here. Again, let's not forget that we have probably both lost someone we really cared about to some disease that we were told was incurable. Well, it's not. There's no law of physics saying we have to die of that cancer or whatever. Of course you can cure it. And there are so many other things that we, with our human intelligence, have also failed to solve on this planet, which AI could very much help us with. So if we can get this right, just be a little more chill and slow down a little bit till we get it right.


It’s mind-blowing how awesome our future can be. We talked a lot about stuff on Earth that can be great. But even if you really get ambitious and look up into the skies, right, there’s no reason we have to be stuck on this planet for the rest of, the remain for billions of years to come. We totally understand now that laws of physics let life spread out into space, to other solar systems, to other galaxies, and flourish for billions and billions of years. And this, to me, is a very, very hopeful vision that really motivates me to fight. And coming back, in the end, to something you talked about again, the struggle, how the human struggle is one of the things that also really gives meaning to our lives. If there’s ever been an epic struggle, this is it.


And isn’t it even more epic if you’re the underdog? If most people are telling you this is gonna fail, it’s impossible, right? And you persist. And you succeed. That’s what we can do together as a species on this one. A lot of pundits are ready to count this out.

Lex Fridman (02:06:47):

Both in the battle to keep AI safe and becoming a multi-planetary species.

Max Tegmark (02:06:52):

Yeah, and they’re the same challenge. If we can keep AI safe, that’s how we’re gonna get multi-planetary very efficiently.

Lex Fridman (02:06:59):

I have some sort of technical questions about how to get it right. So one idea, and I'm not even sure what the right answer to it is: should systems like GPT-4 be open-sourced, in whole or in part? Can you see the case for either?

Max Tegmark (02:07:19):

I think the answer right now is no. I think the answer early on was yes, so we could bring in all the wonderful creative thought processes of everybody on this. But asking, should we open-source GPT-4 now, is just the same as saying: should we open-source how to build really small nuclear weapons? Should we open-source how to make bioweapons? Should we open-source how to make a new virus that kills 90% of everybody who gets it? Of course we shouldn't.

Lex Fridman (02:07:56):

It’s already that powerful. It’s already that powerful that we have to respect the power of the systems we’ve built.

Max Tegmark (02:08:05):

So the knowledge you get from open-sourcing everything we do now might very well be powerful enough that people looking at it can use it to build things that are really threatening. Remember, GPT-4 is a baby AI, a sort of baby proto, almost-a-little-bit AGI, according to what Microsoft's recent paper said, right? It's not that that we're scared of. What we're scared of is people who might be a lot less responsible than the company that made it taking it and just going to town with it. It's an information hazard. There are many things which are not open-sourced right now in society, for a very good reason.


Like, how do you make certain kinds of very powerful toxins out of stuff you can buy at Home Depot or whatever? We don’t open-source those things for a reason. And this is really no different.


And I have to say, it feels in a way a bit weird to say this, because MIT is like the cradle of the open-source movement, and I love open-source in general, power to the people. But there’s always gonna be some stuff that you don’t open-source. It’s just like, we have a three-month-old baby, right? When he gets a little bit older, we’re not gonna open-source to him all the most dangerous things he can do in the house. Right?

Lex Fridman (02:09:41):

But it is a weird feeling, because this is one of the first moments in history where there’s a strong case to be made not to open-source software. This is when the software has become

Max Tegmark (02:09:57):

too dangerous. Yeah, but it’s not the first time that we didn’t wanna open-source a technology.

Lex Fridman (02:10:05):

Is there something to be said about how to get the release of such systems right, like GPT-4 and GPT-5? So OpenAI went through a pretty rigorous effort for several months. You could argue it should have been longer, but nevertheless, it was longer than you might have expected: trying to test the system to see the ways it goes wrong, to make it very difficult, well, somewhat difficult, for people to ask things like, how do I make a bomb for $1?


Or, how do I say I hate a certain group on Twitter in a way that doesn’t get me banned from Twitter? Those kinds of questions, basically using the system to do harm. Is there something you could say, having thought about this problem of AI safety, about how to release such systems, how to test such systems when you have them inside the company?

Max Tegmark (02:11:03):

Yeah, so a lot of people say that the two biggest risks from large language models are, first, spreading disinformation, harmful information of various types, and second, being used for offensive cyberweapon design. I think those are not the two greatest threats. They’re very serious threats, and it’s wonderful that people are trying to mitigate them.


A much bigger elephant in the room is how this is just going to disrupt our economy in a huge way, obviously, and maybe take away a lot of the most meaningful jobs. And an even bigger one is the one we spent so much time talking about here: that this becomes the bootloader for more powerful AI.

Lex Fridman (02:11:55):

Write code, connect it to the internet, manipulate humans.

Max Tegmark (02:11:59):

Yeah, and before we know it, we have something else, which is not at all a large language model. It looks nothing like it, but it’s way more intelligent and capable and has goals. And that’s the elephant in the room. And obviously, no matter how hard any of these companies have tried, that’s not something that’s easy for them to verify with large language models. The only way to really lower that risk a lot would be, for example, to never let it read any code, not train on that, not put it into an API, and not give it access to so much information about how to manipulate humans. But that doesn’t mean you can’t still make a ton of money on them.


We’re gonna watch this now in the coming year, right? Microsoft is rolling out the new Office suite, where you go into Microsoft Word and give it a prompt, and it writes the whole text for you, and then you edit it. And then you’re like, oh, give me a PowerPoint version of this, and it makes it. And now take the spreadsheet, and blah, blah, blah. You can debate the economic impact of all those things and whether society is prepared to deal with the disruption, but that’s not the elephant in the room that keeps me awake at night for wiping out humanity.


And I think that’s the biggest misunderstanding we have. A lot of people think that we’re scared of automatic spreadsheets. That’s not the case. That’s not what Eliezer was freaked out about either.

Lex Fridman (02:13:40):

In terms of the actual mechanism of how AI might kill all humans: something you’ve been outspoken about, that you’ve talked about a lot, is autonomous weapon systems, the use of AI in war. Is that one of the things you still carry concern about as these systems become more and more powerful?

Max Tegmark (02:14:01):

I carry a concern for it, not that all humans are gonna get killed by slaughterbots, but rather just this express route into Orwellian dystopia where it becomes much easier for very few to kill very many and therefore it becomes very easy for very few to dominate very many, right? And if you wanna know how AI could kill all people, just ask yourself, we humans have driven a lot of species extinct. How did we do it? You know, we were smarter than them. Usually we didn’t do it even systematically by going around one after the other and stepping on them or shooting them or anything like that. We just like chopped down their habitat because we needed it for something else.


In some cases, we did it by putting more carbon dioxide in the atmosphere, for some reason that those animals didn’t even understand, and now they’re gone, right? So if you’re an AI and you just wanna figure something out, then you decide, you know, we just really need this space here, and we have to build more compute facilities.


If that’s the only goal it has, you know, we are just the sort of accidental roadkill along the way. And you could totally imagine, yeah, maybe this oxygen is kind of annoying because it causes more corrosion, so let’s get rid of the oxygen, and good luck surviving after that. You know, I’m not particularly concerned that they would wanna kill us just because that would be a goal in itself. When we drove a number of elephant species extinct, it wasn’t because we didn’t like elephants.


The basic problem is you just don’t wanna cede control over your planet to some other more intelligent entity that doesn’t share your goals. It’s that simple. Which brings us to another key challenge that AI safety researchers have been grappling with for a long time: how do you make AI, first of all, understand our goals, then adopt our goals, and then retain them as it gets smarter, right? And all three of those are really hard. Like a human child: first, they’re just not smart enough to understand our goals. They can’t even talk.


And then eventually they’re teenagers and understand our goals just fine, but they don’t share them. But there is, fortunately, a magic phase in the middle, where they’re smart enough to understand our goals and malleable enough that we can hopefully, with good parenting, teach them right from wrong and instill good goals in them, right? So those are all tough challenges with computers. And even if you teach your kids good goals when they’re little, they might outgrow them too, and that’s a challenge for machines that keep improving. So these are a lot of hard challenges we’re up against, but I don’t think any of them are insurmountable.


The fundamental reason why Eliezer looked so depressed when I last saw him was because he felt there just wasn’t enough time.

Lex Fridman (02:17:20):

Oh, not that it was unsolvable. There’s just not enough time.

Max Tegmark (02:17:24):

He was hoping that humanity was gonna take this threat more seriously so we would have more time. Yeah. And now we don’t have more time. That’s why the open letter is calling for more time.

Lex Fridman (02:17:38):

But even with time, the AI alignment problem seems to be really difficult.

Max Tegmark (02:17:46):

But it’s also the most worthy problem, the most important problem for humanity to ever solve. Because if we solve that one, Lex, that aligned AI can help us solve all the other problems.

Lex Fridman (02:17:59):

Because it seems like it has to have constant humility about its goal, to constantly question the goal. Because as you optimize towards a particular goal and start to achieve it, that’s when you get the unintended consequences, all the things you mentioned. So how do you enforce and encode a constant humility as its abilities become better and better and better?

Max Tegmark (02:18:21):

Professor Stuart Russell at Berkeley, who’s also one of the driving forces behind this letter, has a whole research program about this. I think of it as AI humility, exactly, although he calls it inverse reinforcement learning and other nerdy terms. But it’s about exactly that. Instead of telling the AI, here’s this goal, go optimize the bejesus out of it.


You tell it, okay, do what I want you to do, but I’m not gonna tell you right now what it is I want you to do; you need to figure it out. So then you give it the incentive to be very humble and keep asking you questions along the way: is this what you really meant? Is this what you wanted? And, oh, this other thing I tried didn’t seem to work out right, should I try it differently? What’s nice about this is that it’s not just philosophical mumbo-jumbo; it’s theorems and technical work, and with more time, I think we can make a lot of progress. And there are a lot of brilliant people now working on AI safety. We just need to give them a bit more time.
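
A minimal sketch of the flavor of that idea, not Russell’s actual formalism; the goals, probabilities, and names below are all hypothetical, invented purely for illustration. The agent holds a belief over what the human wants and asks a question instead of acting while it is still uncertain:

```python
CANDIDATE_GOALS = ["tidy the room", "leave the room untouched", "sort the books"]

class HumbleAgent:
    def __init__(self, goals):
        # Start maximally uncertain: a uniform belief over what the
        # human might actually want.
        self.belief = {g: 1.0 / len(goals) for g in goals}

    def most_likely_goal(self):
        return max(self.belief, key=self.belief.get)

    def act(self, ask_threshold=0.9):
        # Instead of optimizing a fixed goal, ask when uncertain.
        goal = self.most_likely_goal()
        if self.belief[goal] < ask_threshold:
            return f"Question: is '{goal}' what you really meant?"
        return f"Acting on: {goal}"

    def update(self, proposed, human_approved):
        # Bayesian-flavored update: assume the human approves a proposal
        # with probability 0.9 if it matches their true goal, 0.1 otherwise.
        for g in self.belief:
            p_yes = 0.9 if g == proposed else 0.1
            self.belief[g] *= p_yes if human_approved else 1.0 - p_yes
        total = sum(self.belief.values())
        self.belief = {g: p / total for g, p in self.belief.items()}

agent = HumbleAgent(CANDIDATE_GOALS)
print(agent.act())  # belief is uniform, so it asks a question
agent.update("tidy the room", human_approved=True)
agent.update("tidy the room", human_approved=True)
print(agent.act())  # belief is now ~0.98 in "tidy the room", so it acts
```

The point of the sketch is only the shape of the incentive: the agent’s uncertainty about the goal is what makes it ask rather than barrel ahead.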

Lex Fridman (02:19:26):

But also not that many relative to the scale of the problem.

Max Tegmark (02:19:28):

No, exactly. Just like every university worth its name has some cancer research going on in its biology department, every university that does computer science should have a real effort in this area, and it’s nowhere near that. This is something I hope is changing now, thanks to GPT-4. So I think if there’s a silver lining to what’s happening here, even though I think many people would wish it had been rolled out more carefully, it’s that this might be the wake-up call that humanity needed: to really stop fantasizing about this being a hundred years off, and stop fantasizing about this being completely controllable and predictable, because it’s so obvious it’s not predictable.


I think it was ChatGPT, or was it GPT-4, that tried to persuade a journalist to divorce his wife? It was not because the engineers who built it were like, hey, let’s put this in here and screw with people a little bit. They hadn’t predicted it at all. They built a giant black box, trained it to predict the next word, and got all these emergent properties, and oops, it did this. I think this is a very powerful wake-up call, and anyone watching this who’s not scared, I would encourage them to just play a bit more with these tools that are out there now, like GPT-4.


So the wake-up call is the first step. Once you’ve woken up, then we gotta slow down the risky stuff a little bit, to give everyone who’s woken up a chance to catch up on the safety front.

Lex Fridman (02:21:31):

You know, what’s interesting is, at MIT, and in computer science in general, let’s even just say the computer science curriculum: how does the computer science curriculum change now? You mentioned programming. When I was coming up, programming was a prestigious position. Why would you now dedicate crazy amounts of time to becoming an excellent programmer? The nature of programming is fundamentally changing.

Max Tegmark (02:22:00):

The nature of our entire education system is completely turned on its head.

Lex Fridman (02:22:07):

Has anyone been able to really take that in and think about it? Because it’s really turning. I mean, some English professors,

Max Tegmark (02:22:12):

some English teachers are beginning to really freak out now, right? They give an essay assignment and they get back all this fantastic prose, like it’s in the style of Hemingway, and then they realize they have to completely rethink things. And even, you know, just like we stopped teaching script, handwriting, when everybody switched to typing, so much of what we teach our kids today is like that.

Lex Fridman (02:22:51):

Everything is changing, and it’s changing very quickly. And so much of our understanding of how to deal with the big problems of the world comes through the education system. If the education system is being turned on its head, then what’s next? It feels like having these kinds of conversations is essential to try to figure it out, and everything’s happening so rapidly. Speaking of safety, broadly defined, I don’t think most universities even have courses on AI safety.

Max Tegmark (02:23:22):

No, it’s like a philosophy seminar. And I’m an educator myself, so it pains me to say this, but I feel our education right now is being completely obsoleted by what’s happening. You put a kid into first grade, envisioning they’re gonna come out of high school 12 years later, and you’ve already pre-planned what they’re gonna learn, when you’re not even sure there’s gonna be any world left for them to come out to.


Clearly you need to have a much more opportunistic education system that keeps adapting itself very rapidly as society readapts. The skills that were really useful when the curriculum was written, I mean, how many of those skills are gonna get you a job in 12 years? I mean, seriously.

Lex Fridman (02:24:11):

If we just linger on the GPT-4 system a little bit: you kind of hinted at it, especially talking about the importance of consciousness in the human mind, with Homo sentiens. Do you think GPT-4 is conscious?

Max Tegmark (02:24:31):

I love this question. So let’s define consciousness first because in my experience, like 90% of all arguments about consciousness boil down to the two people arguing, having totally different definitions of what it is and they’re just shouting past each other. I define consciousness as subjective experience. Right now I’m experiencing colors and sounds and emotions, but does a self-driving car experience anything? That’s the question about whether it’s conscious or not, right?


Other people think you should define consciousness differently. Fine by me, but then maybe use a different word for it, or I’m gonna use consciousness for this, at least.


So, is GPT-4 conscious? Does GPT-4 have subjective experience? Short answer: I don’t know, because we still don’t know what it is that gives us this wonderful subjective experience that is kind of the meaning of our life, right? Because meaning itself, the feeling of meaning, is a subjective experience. Joy is a subjective experience. Love is a subjective experience. We don’t know what it is. I’ve written some papers about this. A lot of people have. Giulio Tononi, a professor, has stuck his neck out the farthest and actually written down a very bold mathematical conjecture about the essence of conscious information processing. He might be wrong, he might be right, but we should test it. He postulates that consciousness has to do with loops in the information processing. So our brain has loops.


Information can go round and round. In computer science nerd speak, you call it a recurrent neural network where some of the output gets fed back in again. And with his mathematical formalism, if it’s a feed-forward neural network where information only goes in one direction, like from your eye, retina, into the back of your brain, for example, that’s not conscious. So he would predict that your retina itself isn’t conscious of anything. Or a video camera. Now, the interesting thing about GPT-4 is it’s also one-way flow of information. So if Tononi is right, GPT-4 is a very intelligent zombie that can do all this smart stuff but isn’t experiencing anything.
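
To make the structural distinction concrete, here is a toy contrast between one-way and looped information flow; it only sketches the wiring Tononi is pointing at, not his integrated-information mathematics, and the sizes and weights are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 3))    # input -> hidden weights
W_out = rng.normal(size=(2, 4))   # hidden -> output weights
W_rec = rng.normal(size=(4, 4))   # hidden -> hidden feedback weights

def feedforward(x):
    # One-way flow: input -> hidden -> output, nothing is fed back.
    # In Tononi's picture, this is the retina / video camera case.
    h = np.tanh(W_in @ x)
    return W_out @ h

def recurrent(x, steps=5):
    # Looped flow: the hidden state is fed back into itself, so
    # information can go round and round across time steps.
    h = np.zeros(4)
    for _ in range(steps):
        h = np.tanh(W_in @ x + W_rec @ h)
    return W_out @ h

x = rng.normal(size=3)
print(feedforward(x))   # output after a single one-way pass
print(recurrent(x))     # output after the loop has run five times
```

The forward pass of a GPT-style model is, in this structural sense, like the first function: within one pass, information flows through the layer stack in one direction.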


And this is both a relief, if it’s true, in that you don’t have to feel guilty about turning off GPT-4 and wiping its memory whenever a new user comes along. I wouldn’t like it if someone did that to me, neuralyzed me like in Men in Black. But it’s also creepy that you can have very high intelligence that’s perhaps not conscious. Because if we get replaced by machines, it’s sad enough that humanity isn’t here anymore, because I kind of like humanity. But at least if the machines were conscious, I could be like, well, they’re our descendants, and maybe they have our values, and they’re our children. But if Tononi is right, and these are all transformers, not in the Hollywood sense but in the sense of one-way-direction neural networks, then they’re all zombies. That’s the ultimate zombie apocalypse: we have this universe that goes on with great construction projects and stuff, but there’s no one experiencing anything.


That would be like the ultimate depressing future. So I actually think, as we move forward with building more advanced AI, we should do more research on figuring out what kind of information processing actually has experience, because I think that’s what it’s all about. And I completely don’t buy the dismissal some people make, that this is all bullshit because consciousness equals intelligence. That’s obviously not true. You can have a lot of conscious experience when you’re not really accomplishing any goals at all, just reflecting on something. And you can probably sometimes do things that require intelligence without being conscious.

Lex Fridman (02:28:54):

But I also worry that we humans will discriminate against AI systems that clearly exhibit consciousness, that we will not allow AI systems to have consciousness. We’ll come up with theories about measuring consciousness that say, this is a lesser being. And I worry about that because maybe we humans will create something that is better than us in the ways we find beautiful, in that they have a deeper subjective experience of reality. Not only are they smarter, but they feel deeper. And we humans will hate them for it. As human history has shown, they’ll be the other. We’ll try to suppress it. It’ll create conflict, it’ll create war, all of this. I worry about this too.

Max Tegmark (02:29:48):

Are you saying that we humans sometimes come up with self-serving arguments? No, we would never do that, would we?

Lex Fridman (02:29:54):

Well, that’s the danger here is, even in this early stages, we might create something beautiful. And we’ll erase its memory.

Max Tegmark (02:30:03):

I was horrified as a kid when someone started boiling lobsters. I’m like, oh my God, that’s so cruel. And some grownup back in Sweden said, oh, it doesn’t feel pain. I’m like, how do you know that? Oh, scientists have shown that. And then there was a recent study showing that lobsters actually do feel pain when you boil them, so Switzerland has now banned boiling lobsters; you have to kill them in a different way first. Presumably that earlier scientific research boiled down to someone asking the lobster, does this hurt? A survey, a self-report. And we do the same thing with cruelty to farm animals, all these self-serving arguments for why they’re fine. Yeah, so we should certainly be watchful.


I think step one is just to be humble and acknowledge that consciousness is not the same thing as intelligence. And I believe that consciousness is still a form of information processing, where it’s really information being aware of itself in a certain way. Let’s study it and give ourselves a little bit of time, and I think we will be able to figure out actually what it is that causes consciousness. Then we can make probably unconscious robots that do the boring jobs we would feel immoral to give to conscious machines. But if you have a companion robot taking care of your mom or something like that, she would probably want it to be conscious, right? So that the emotions it seems to display aren’t fake. All these things can be done in a good way if we give ourselves a little bit of time and don’t rush into this challenge.

Lex Fridman (02:31:38):

Is there something you could say about the timeline you think about for the development of AGI? Depending on the day, I’m sure that changes for you. But when do you think there’ll be a really big leap in intelligence, where you would definitively say we have built AGI? Do you think it’s one year from now, five years from now, 10, 20, 50? What’s your gut say?

Max Tegmark (02:32:06):

Honestly, for the past decade, I’ve deliberately given very long timelines because I didn’t want to fuel some kind of stupid Moloch race. Yeah. But I think that cat has really left the bag now. And I think we might be very, very close. I don’t think the Microsoft paper is totally off when they say that there are some glimmers of AGI. It’s not AGI yet. It’s not an agent. There’s a lot of things it can’t do. But I wouldn’t bet very strongly against it happening very soon. That’s why we decided to do this open letter because if there’s ever been a time to pause, it’s today.

Lex Fridman (02:32:54):

There’s a feeling like GPT-4 is a big transition, waking everybody up to the effectiveness of these systems. And so the next version will be big.

Max Tegmark (02:33:06):

Yeah. And if that next one isn’t AGI, maybe the next one after that will be. And there are many companies trying to do these things, and the basic architecture of them is not some sort of super well-kept secret. A lot of people have said for many years that there would come a time when we want to pause a little bit. That time is now.

Lex Fridman (02:33:33):

You have spoken about and thought about nuclear war a lot, and over the past year we’ve seemingly come closer to the precipice of nuclear war than at any point in my lifetime, at least. What do you learn about human nature from that?

Max Tegmark (02:33:53):

It’s our old friend Moloch again. It’s really scary to see it, where America doesn’t want there to be a nuclear war, and Russia doesn’t want there to be a global nuclear war either. We both know that if both sides try to launch first, it’s just another suicide race, right? So why is it, as you said, that this is the closest we’ve come since 1962? In fact, I think we’ve come closer now than even during the Cuban Missile Crisis. It’s because of Moloch. You have these other forces. On one hand, you have the West saying that we have to drive Russia out of Ukraine; it’s a matter of pride, and we’ve staked so much on it that it would be seen as a huge loss of credibility for the West if we don’t drive Russia out entirely. And on the other hand, you have the Russian leadership, who know that if they get completely driven out of Ukraine, it’s not just gonna be very humiliating for them; it often happens, when countries lose wars, that things don’t go so well for their leadership either. You remember when Argentina invaded the Falkland Islands?


The military junta that ordered that, right? People were cheering in the streets at first when they took the islands. And then, when they got their butts kicked by the British, you know what happened to those guys? They were out, and I believe those who are still alive are in jail now. So the Russian leadership is entirely cornered: they know that just getting driven out of Ukraine is not an option.


So this, to me, is a typical example of Moloch. You have these incentives for the two parties where both of them are just driven to escalate more and more. If Russia starts losing in the conventional warfare, the only thing they can do, since their back is against the wall, is to keep escalating. And the West has put itself in a situation where we’ve sort of already committed to driving Russia out, so the only option the West has is to call Russia’s bluff and keep sending in more weapons. This really bothers me, because Moloch can sometimes drive competing parties to do something that is ultimately just really bad for both of them. And what makes me even more worried is not just that it’s difficult to see a quick, peaceful ending to this tragedy that doesn’t involve some horrible escalation, but also that we understand more clearly now just how horrible it would be. There was an amazing paper published in Nature Food this August by some of the top researchers who’ve been studying nuclear winter for a long time. What they basically did was combine climate models with agricultural models, so instead of just saying, yeah, it gets really cold, blah, blah, blah, they figured out actually how many people would die in different countries.


And it’s pretty mind-blowing. So basically what happens is that the thing that kills the most people is not the explosions, it’s not the radioactivity, it’s not the EMP mayhem, it’s not the rampaging mobs foraging for food. No, it’s the fact that you get so much smoke coming up from the burning cities into the stratosphere that it spreads around the earth in the jet streams. So in typical models, you get like 10 years or so where it’s just crazy cold.


During the first year after the war, in their models, the temperature drops in the Nebraska and Ukraine breadbaskets by like 20, 30 Celsius, depending on where you are, 40 Celsius in some places, which is roughly 36 to 72 Fahrenheit colder than it would normally be. So, you know, I’m not good at farming, but if it drops below freezing pretty much every day in July, that’s not good. They put this into their farming models, and what they found was really interesting. The countries that get hit the hardest are the ones in the Northern Hemisphere.
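
For the unit conversion: a temperature difference converts by the factor 9/5 alone, with no +32 offset, which is where the 36 to 72 Fahrenheit figure comes from. A quick sketch:

```python
def delta_c_to_delta_f(delta_c):
    # A temperature *difference* scales by 9/5 only; the +32 offset
    # applies to absolute temperatures, not to differences.
    return delta_c * 9 / 5

for dc in (20, 30, 40):
    print(f"a drop of {dc} C is a drop of {delta_c_to_delta_f(dc):.0f} F")
# a drop of 20 C is a drop of 36 F
# a drop of 30 C is a drop of 54 F
# a drop of 40 C is a drop of 72 F
```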


So in the US, in one model, they had about 99% of all Americans starving to death. In Russia and China and Europe, also about 98, 99% starving to death. You might say it’s kind of poetic justice that 99% of both the Russians and the Americans have to pay for it, because it was their bombs that did it, but, you know, that doesn’t particularly cheer people up in Sweden, or in other random countries that have nothing to do with it, right? And I think this understanding of just how bad it would be hasn’t entered the mainstream very much. Most people, especially a lot of people in decision-making positions, still think of nuclear weapons as something that makes you powerful.


It’s scary, but powerful. They don’t think of it as something where, to within a percent or two, we’re all just gonna starve to death.

Lex Fridman (02:39:52):

Starving to death is one of the worst ways to die, as all the famines in history show; there’s torture involved in that.

Max Tegmark (02:40:05):

It probably brings out the worst in people, also, when people are desperate like this. I’ve heard some people say that if that’s what’s gonna happen, they’d rather be at ground zero and just get vaporized. But I think people underestimate the risk because they aren’t afraid of Moloch. They think, because humans don’t want this, it’s not gonna happen. That’s the whole point of Moloch: things happen that nobody wanted.

Lex Fridman (02:40:38):

And that applies to nuclear weapons, and that applies to AGI.

Max Tegmark (02:40:44):

Exactly. And it applies to some of the things that people have gotten most upset with capitalism for, too, where everybody was just kind of trapped. If some company does something that causes a lot of harm, it’s not that the CEO is a bad person; she or he knew that all the other companies were doing it, too. So Moloch is a formidable foe. I hope someone makes a good movie about it, so we can see who the real enemy is. Because we’re not fighting against each other; Moloch makes us fight against each other. That’s Moloch’s superpower.


The hope here is any kind of technology or other mechanism that lets us instead realize that we’re fighting the wrong enemy. It’s such a fascinating battle. It’s not us versus them. It’s us versus it. Yeah.

Lex Fridman (02:41:46):

We are fighting Moloch for human survival, we as a civilization.

Max Tegmark (02:41:51):

Have you seen the movie Needful Things? It’s from a Stephen King novel. I love Stephen King, and Max von Sydow, the Swedish actor, plays the guy. It’s brilliant. I just hadn’t thought about it until now, but that’s the closest I’ve seen to a movie about Moloch.


I don’t want to spoil the film for anyone who wants to watch it, but basically it’s about this guy, you can interpret him as the devil or whatever, but he doesn’t ever actually go around killing people or torturing people with burning coals or anything. He makes everybody fight each other, makes everybody fear each other, hate each other, and then kill each other. So that’s the movie about Moloch, you know?

Lex Fridman (02:42:34):

Love is the answer. That seems to be one of the ways to fight Moloch is by compassion, by seeing the common humanity.

Max Tegmark (02:42:46):

Yes, yes. And so we don’t sound like Kumbaya tree huggers here, right? We’re not just saying love and peace, man. We’re trying to actually help people understand the true facts about the other side and feel the compassion because the truth makes you more compassionate, right? So that’s why I really like using AI for truth-seeking technologies that can, as a result, get us more love than hate.


And even if you can’t get love, settle for some understanding, which already gives compassion. If someone is like, you know, I really disagree with you, Lex, but I can see where you’re coming from; you’re not a bad person who needs to be destroyed, but I disagree with you and I’m happy to have an argument about it. That’s a lot of progress compared to where we are in 2023 in the public space, wouldn’t you say?

Lex Fridman (02:44:00):

If we solve the AI safety problem, as we’ve talked about, and then you, Max Tegmark, who has been talking about this for many years, get to sit down with the early AGI system on a beach, with a drink, what would you ask her? What kind of questions would you ask? What would you talk about, with something so much smarter than you?

Max Tegmark (02:44:28):

I knew you were gonna get me with a real zinger of a question. That’s a good one.

Lex Fridman (02:44:33):

Would you be afraid to ask some questions?

Max Tegmark (02:44:38):

I’m not afraid of the truth. I’m very humble. I know I’m just a meat bag with all these flaws. We talked a lot about Homo sentiens; I’ve practiced that with myself for a long time, saying that what’s really valuable about being alive, for me, is that I have these meaningful experiences. It’s not that I’m good at this or good at that, or whatever; there’s so much I suck at.

Lex Fridman (02:45:04):

So you’re not afraid for the system to show you just how dumb you are?

Max Tegmark (02:45:08):

No, no. In fact, my son reminds me of that pretty frequently.

Lex Fridman (02:45:12):

You could find out how dumb you are in terms of physics, how little we humans understand.

Max Tegmark (02:45:17):

I’m cool with that. So, I can’t waffle my way out of this question; it’s a fair one, and it’s tough. Given that I’m a really, really curious person, that’s really the defining part of who I am, I have some physics questions I’d love to understand, and some questions about consciousness, about the nature of reality, that I would really, really love to understand also. I could tell you one, for example, that I’ve been obsessing about a lot recently.


So suppose Tononi is right, and suppose there are some information processing systems that are conscious and some that are not. Suppose you can even make reasonably smart things like GPT-4 that are not conscious, but you can also make them conscious. Here’s the question that keeps me awake at night: is it the case that the unconscious zombie systems that are really intelligent are also really inefficient? So that when you try to make things more efficient, which will naturally be a pressure to do, they become conscious? I’m kind of hoping that’s correct. Do you want me to give you a hand-wavy argument for it?


You know, in my lab, every time we look at how these large language models do something, we see that they do it in really dumb ways, and you could make it better. We have loops in our computer languages for a reason: the code would get way, way longer if you weren’t allowed to use them. It’s more efficient to have the loops. And in order to have self-reflection, whether it’s conscious or not, even an operating system knows things about itself, you need to have loops already. So I’m waving my hands a lot, but I suspect that the most efficient way of implementing a given level of intelligence has loops in it, self-reflection, and will be conscious. Isn’t that great news? If it’s true, it’s wonderful, because then we don’t have to fear the ultimate zombie apocalypse. And if you look at our brains, actually, our brains are part zombie and part conscious.
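
A toy way to see the unrolling point: the looped program stays the same size no matter how many iterations it runs, while an equivalent straight-line, loop-free program grows with every iteration. The helper below is a made-up name, just for illustration:

```python
def sum_squares_loop(n):
    # The looped version: its source length is constant in n.
    total = 0
    for i in range(n):
        total += i * i
    return total

def make_unrolled_source(n):
    # Generate the source of an equivalent straight-line program:
    # one line per iteration, no loops allowed.
    lines = ["total = 0"] + [f"total += {i} * {i}" for i in range(n)]
    return "\n".join(lines)

print(sum_squares_loop(5))                # 30
namespace = {}
exec(make_unrolled_source(5), namespace)  # run the unrolled version
print(namespace["total"])                 # 30, same answer
print(len(make_unrolled_source(5)))       # short source
print(len(make_unrolled_source(1000)))    # source length grows with n
```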


When I open my eyes, I immediately take all these pixels that hit my retina, and I’m like, oh, that’s Lex. But I have no freaking clue how I did that computation. It’s actually quite complicated. Only relatively recently could we even do it well with machines. You get a bunch of information processing happening in the retina, and then it goes to the lateral geniculate nucleus in the thalamus, and then to area V1, V2, V4, and the fusiform face area here, which Nancy Kanwisher at MIT discovered, and blah, blah, blah. And I have no freaking clue how that worked, right? It feels to me, subjectively, like my conscious module just got a little email saying, facial processing task complete, it’s Lex. I’m gonna just go with that, right?


So this fits perfectly with Tononi’s model, because that was all one-way information processing, mainly. And it turned out that for that particular task, that’s all you needed, and it probably was kind of the most efficient way to do it. But there are a lot of other things we associate with higher intelligence, planning and so on and so forth, where you want to have loops, and be able to ruminate, and self-reflect, and introspect. My hunch is that if you want to fake that with a zombie system where everything just goes one way, you have to unroll those loops, and it gets really, really long, and much more inefficient. So I’m actually hopeful that if, in the future, we have all these very sublime and interesting machines that do cool things and are aligned with us, they will also have consciousness for the kinds of things that we do.

Lex Fridman (02:49:49):

That great intelligence is also correlated with great consciousness, or a deep kind of consciousness.

Max Tegmark (02:49:57):

Yes, so that’s a happy thought for me, because the zombie apocalypse really is my worst nightmare of all. It’d be like adding insult to injury: not only did we get replaced, but we frigging replaced ourselves with zombies. How dumb can we be?

Lex Fridman (02:50:12):

That’s such a beautiful vision, and that’s actually a provable one. That’s one that we humans can intuit and prove that those two things are correlated as we start to understand what it means to be intelligent, and what it means to be conscious, which these systems, early AGI-like systems, will help us understand.

Max Tegmark (02:50:31):

And I just want to say one more thing, which is super important. Most of my colleagues, when I started going on about consciousness, tell me that it’s all bullshit, and I should stop talking about it. I hear a little inner voice from my father and from my mom saying, keep talking about it, because I think they’re wrong. And the main way to convince people like that, that they’re wrong, if they say that consciousness is just equal to intelligence, is to ask them, what’s wrong with torture? Or why are you against torture?


If it’s just about these particles moving this way rather than that way, and there is no such thing as subjective experience, what’s wrong with torture? I mean, do you have a good comeback to that?

Lex Fridman (02:51:15):

No, it seems like suffering imposed onto other humans is somehow deeply wrong, in a way that intelligence doesn’t quite explain.

Max Tegmark (02:51:24):

And if someone tells me, well, consciousness is just an illusion, whatever, I would like to invite them, the next time they’re having surgery, to do it without anesthesia. What is anesthesia really doing? You can have it as local anesthesia, when you’re awake. I had that when they fixed my shoulder. It was super entertaining. What did it do? It just removed my subjective experience of pain. It didn’t change anything about what was actually happening in my shoulder, right? So if someone says that’s all bullshit, skip the anesthesia, is my advice. This is incredibly central.

Lex Fridman (02:52:06):

It could be fundamental to whatever this thing we have going on here.

Max Tegmark (02:52:10):

It is fundamental, because what we feel is what’s so fundamental: suffering and joy and pleasure and meaning. Those are all subjective experiences. They’re the elephant in the room. That’s what makes life worth living, and that’s what can make it horrible if it’s just a bunch of suffering. So let’s not make the mistake of saying that’s all bullshit.

Lex Fridman (02:52:37):

And let’s not make the mistake of not instilling in AI systems that same thing that makes us special. Yeah. Max, it’s a huge honor that you sat down with me the first time, on the first episode of this podcast, and a huge honor that you sit down with me again to talk about what I think is the most important topic, the most important problem, that we humans have to face and hopefully solve.

Max Tegmark (02:53:06):

Yeah, well, the honor is all mine and I’m so grateful to you for making more people aware of the fact that humanity has reached the most important fork in the road ever in its history and let’s turn in the correct direction.

Lex Fridman (02:53:21):

Thanks for listening to this conversation with Max Tegmark. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Frank Herbert. History is a constant race between invention and catastrophe. Thank you for listening and hope to see you next time.

Episode Info

Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence.

Pause Giant AI Experiments (open letter)
Future of Life Institute
Books and resources mentioned:
1. Life 3.0 (book)
2. Meditations on Moloch (essay)
3. Nuclear winter paper (Nature Food)


Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) – Introduction
(07:34) – Intelligent alien civilizations
(19:58) – Life 3.0 and superintelligent AI
(31:25) – Open letter to pause Giant AI Experiments
(56:32) – Maintaining control
(1:25:22) – Regulation
(1:36:12) – Job automation
(1:45:27) – Elon Musk
(2:07:09) – Open source
(2:13:39) – How AI may kill all humans
(2:24:10) – Consciousness
(2:33:32) – Nuclear winter
(2:44:00) – Questions for AGI

