
Transcript

Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present.


Lex Fridman (00:00):

The following is a conversation with Manolis Kellis, his fifth time on this podcast. He’s a professor at MIT and head of the MIT Computational Biology Group. He’s one of the greatest living scientists in the world, but he’s also a humble, kind, caring human being that I have the greatest of honors and pleasures of being able to call a friend. And now, a quick few-second mention of each sponsor. Check them out in the description. It’s the best way to support this podcast. We’ve got Eight Sleep for naps, NetSuite for business management software, ExpressVPN for privacy and security on the interwebs, and InsideTracker for biological data. Choose wisely, my friends. Also, if you want to work with our amazing team who are always hiring, go to lexfridman.com slash hiring.

(00:54):

And now, onto the full ad reads. As always, no ads in the middle. I try to make this interesting. I often fail, but I try. And if you must skip them, please still check out our sponsors. I enjoy their stuff. Maybe you will, too. This episode is brought to you by Eight Sleep and its new Pod 3 mattress, which I think of as a teleportation device into a land of dreams, a place where the mind goes to escape the space-time physics of this reality of the waking world. Anything is possible in the place of dreams. The darkness that lurks in the Jungian shadow is possible.

(01:36):

The hope, the triumph, symbolized as the light at the end of the tunnel is possible. All of it is possible. It’s all up to you. I’m kind of somebody that likes both the good and the bad of dreaming. There’s a cleansing aspect of a bad dream. You wake up freaking out a little bit, but then you realize how awesome this life is, that whatever happened in the dream world is not real.

(01:59):

It’s a kind of dress rehearsal for a bad event that happens in reality, but it doesn’t. It’s like in a video game. You get to save, do a dangerous thing, screw it up, and then you get to load and try again. That’s what a dream is. Anyway, I love dreaming. I love sleeping. I love naps, and an Eight Sleep mattress is the best place to teleport into that dream world. Check them out and get special savings when you go to eightsleep.com slash lex.

(02:28):

This show is also brought to you by NetSuite, an all-in-one cloud business management system. It manages financials, human resources, inventory, e-commerce, and many business-related details, all things that I have to start figuring out. I put up a job posting for somebody to help me with financials, all of it. It needs so much help because running a business, any kind of business, whether it’s a creative business or a robotics factory or any kind of AI software company, anything you do has so many components, and I would say many of them don’t involve any of the kind of cutting-edge engineering and design and brainstorming and innovation and research and all that kind of stuff.

(03:12):

You have to do all the basic minutiae, the glue that ties together people and makes the whole thing run, and I think you should use the best tool for that job, and NetSuite is something I can definitely recommend as a great tool. You can start now with no payment or interest for six months. Go to netsuite.com slash lex to access their one-of-a-kind financing program. That’s netsuite.com slash lex. This show is also brought to you by ExpressVPN.

(03:43):

My comrade, my friend, the piece of software that has accompanied me through darkness and light for many years, way before I had a podcast, way before I found my way. Though I am still forever lost, and you, if you too are forever lost, perhaps it will also warm your heart as it did mine. First of all, practically speaking, let’s put the romantic stuff aside, you should be using a VPN on the internet, and ExpressVPN is the VPN I’ve used and can highly, highly, highly recommend.

(04:15):

By the way, I apologize for the coarseness of my voice. I’ve been feeling a little bit under the weather, whatever the heck that expression actually means. There’s always ChatGPT that I can ask the question, but I’m not going to. I’m just gonna go with it, gonna wing it. Gonna wing it is another funny expression, right?

(04:31):

Wing it, what does that mean? Probably has to do with birds, and the fact that bird flight is a kind of chaotic process that’s not amenable to clear dynamical system modeling that, for example, an airplane is. But let us return to the piece of software you should be using to warm your heart and to protect your privacy on the internet. Go to expressvpn.com slash lexpod for an extra three months free.

(05:01):

This show is also brought to you by InsideTracker, a service I use to track biological data that comes from my body. There’s that song, It’s My Party, and I’ll Cry If I Want To, and I used to think it said, it’s my body and I’ll cry if I want to. I don’t know what I thought that actually meant, but it’s a good song, it’s a silly song. It’s my party and I’ll cry if I want to. Cry if I want to, cry if I want to. You would cry too if it happened to you. Anyway, speaking of which, we’re going hard on the tangents today. I like data that comes from my body that is then used in machine learning algorithms to make decisions or recommendations of what I should do with said body. Lifestyle choices, diet, maybe in the future it’d be career advice, all kinds of stuff. Dating, friends, anything. But basically, health stuff, medicine, I think that’s the obvious way you should be figuring out what to do with your body: at least in large part, based on the data that comes from your body. And not just once, but many times, over and over and over and over. InsideTracker is pioneering the data collection, sort of the blood test, that then can extract all kinds of information and give you advice. I highly recommend them. Get special savings for a limited time when you go to insidetracker.com slash lex.

(06:19):

This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Manolis Kellis.

Manolis Kellis (06:45):

Good to see you, first of all, man. Lex, I’ve missed you. I think you’ve changed the lives of so many people that I know. And it’s truly such a pleasure to be back, such a pleasure to see you grow, to sort of reach so many different aspects of your own personality. Thank you for the love.

Lex Fridman (06:58):

You’ve always given me so much support and love. I just can’t, I’m forever grateful for that.

Manolis Kellis (07:02):

It’s lovely to see a fellow human being who has that love, who basically does not judge people. And there’s so many judgmental people out there, and it’s just so nice to see this beacon of openness.

Lex Fridman (07:13):

So what makes me, one instantiation of human, irreplaceable, do you think, as we enter this age of increasingly capable AI? I have to ask, what do you think makes humans irreplaceable?

Manolis Kellis (07:26):

So humans are irreplaceable because of the baggage that we talked about. So we talked about baggage. We talked about the fact that every one of us has effectively relearned all of human civilization in their own way. So every single human has a unique set of genetic variants that they’ve inherited, some common, some rare, and some make us think differently, some make us have different personalities. They say that a parent with one child believes in genetics. A parent with multiple children understands genetics. Just how different kids are, and my three kids have dramatically different personalities ever since the beginning. So one thing that makes us unique is that every one of us has a different hardware.

(08:09):

The second thing that makes us unique is that every one of us has a different software, uploading of all of human society, all of human civilization, all of human knowledge. We’re not born knowing it. We’re not like, I don’t know, birds that learn how to make a nest through genetics and will make a nest even if they’ve never seen one. We are constantly relearning all of human civilization. So that’s the second thing. And the third one that actually makes humans very different from AI is that the baggage we carry is not experiential baggage. It’s also evolutionary baggage.

(08:42):

So we have evolved through rounds of complexity. So just like ogres have layers, and Shrek has layers, humans have layers. There’s the cognitive layer, which is sort of the outermost, the latest evolutionary innovation, this enormous neocortex that we have evolved. And then there’s the emotional baggage underneath that, and then there’s all of the fear and fright and flight and all of these kinds of behaviors.

(09:12):

So AI only has a neocortex. AI doesn’t have a limbic system. It doesn’t have this complexity of human emotions, which make us so, I think, beautifully complex, so beautifully intertwined with our emotions, with our instincts, with our sort of gut reactions and all of that. So I think when humans are trying to suppress that aspect, the sort of quote-unquote more human aspect towards a more cerebral aspect, I think we lose a lot of the creativity. We lose a lot of the freshness of humans.

Lex Fridman (09:49):

And I think that’s quite irreplaceable. So we can look at the entirety of people that are alive today and maybe all humans who have ever lived and map them in this high-dimensional space, and there’s probably a center of mass for that mapping, and a lot of us deviate in different directions. So the variety of directions in which we all deviate from that center is vast.

Manolis Kellis (10:12):

I would like to think that the center is actually empty, that basically humans are just so diverse from each other that there’s no such thing as an average human, that every one of us has some kind of complex baggage of emotions, intellectual, motivational, behavioral traits that it’s not just one sort of normal distribution we deviate from it. There’s so many dimensions that we’re kind of hitting the sort of sparseness, the curse of dimensionality, where it’s actually quite sparsely populated. And I don’t think you have an average human being.

Lex Fridman (10:49):

So what makes us unique in part is the diversity and the capacity for diversity. And the capacity of the diversity comes from the entire evolutionary history. So there’s just so many ways we can vary from each other.

Manolis Kellis (11:04):

Yeah, I would say not just the capacity but the inevitability of diversity. Basically, it’s in our hardware. We are wired differently from each other. My siblings and I are completely different. My kids from each other are completely different. My wife, she’s like number two of six siblings. From a distance, they look the same, but then you get to know them. Every one of them is completely different.

Lex Fridman (11:28):

But sufficiently the same that the differences interplay with each other. So that’s the interesting thing, where the diversity is functional, it’s useful. So it’s like we’re close enough to where we notice the diversity and it doesn’t completely destroy the possibility of effective communication and interaction. So we’re still the same kind of thing.

Manolis Kellis (11:49):

So what I said in one of our earlier podcasts is that if humans realize that we’re 99.9% identical, we would basically stop fighting with each other. Like we are really one human species and we are so, so similar to each other. And if you look at the alternative, if you look at the next thing outside humans, like it’s been six million years that we haven’t had a relative. So it’s truly extraordinary that we’re kind of like this dot in outer space compared to the rest of life on earth.

Lex Fridman (12:23):

When you think about evolving through rounds of complexity, can you maybe elaborate such a beautiful phrase, beautiful thought that there’s layers of complexity that make.

Manolis Kellis (12:33):

So with software, sometimes you’re like, oh, let’s like build version two from scratch. But this doesn’t happen in evolution. In evolution, you layer in additional features on top of old features. So basically, like every single time my cells divide, I’m a yeast, like I’m a unicellular organism. And then cell division is basically identical. Every time I breathe in and my lungs expand, I’m basically, you know, like every time my heart beats, I’m a fish.

(13:06):

So basically that, I still have the same heart. Like very, very little has changed. The blood going through my veins, the oxygen, the, you know, our immune system, we’re basically primates. Our social behavior, we’re basically new world monkeys and old world monkeys. We’re basically this concept that every single one of these behaviors can be traced somewhere in evolution. And that all of that continues to live within us is also a testament to not just not killing other humans, for God’s sake, but like not killing other species either.

(13:41):

Like just to realize just how united we are with nature and that all of these biological processes have never ceased to exist. They’re continuing to live within us. And then just the neocortex and all of the reasoning capabilities of humans are built on top of all of these other species that continue to live, breathe, divide, metabolize, fight off pathogens, all continue inside us.

Lex Fridman (14:03):

So you think the neocortex, whatever reasoning is, that’s the latest feature in the latest version of this journey?

Manolis Kellis (14:11):

It’s extraordinary that humans have evolved so much in so little time. Again, if you look at the timeline of evolution, you basically have billions of years to even get to a dividing cell and then a multicellular organism and then a complex body plan. And then these incredible senses that we have for perceiving the world, the fact that bats can fly, that they evolved flight, they evolved sonar, in the span of a few million years. I mean, it’s just extraordinary how much evolution has kind of sped up. And all of that comes through this evolvability.

(14:49):

The fact that we took a while to get good at evolving and then once you get good at evolving, you can sort of, you have modularity built in, you have hierarchical organizations built in, you have all of these constructs that allow meaningful changes to occur without breaking the system completely. If you look at a traditional genetic algorithm the way that humans designed them in the 60s, you can only evolve so much. And as you evolve a certain amount of complexity, the number of mutations that move you away from something functional exponentially increases and the number of mutations that move you to something better exponentially decreases. So the probability of evolving something so complex becomes infinitesimally small as you get more complex.

(15:35):

But with evolution, it’s almost the opposite, almost the exact opposite. That it appears that it’s speeding up exactly as complexity is increasing. And I think that’s just the system getting good at evolving.
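
To make the genetic-algorithm point concrete, here is a minimal sketch (an illustration of mine, not anything from the conversation): naive hill-climbing on a bit string, where the share of random point mutations that still improve fitness shrinks as the genome approaches its target.

```python
# Toy illustration: fitness is the number of 1s, and the target is the
# all-ones string. As fitness rises, fewer random point mutations help.
import random

GENOME_LEN = 200

def fitness(genome):
    return sum(genome)

def improving_fraction(genome, samples=2000):
    """Estimate the share of random single-bit flips that raise fitness."""
    base = fitness(genome)
    better = 0
    for _ in range(samples):
        i = random.randrange(GENOME_LEN)
        mutant = list(genome)
        mutant[i] ^= 1
        if fitness(mutant) > base:
            better += 1
    return better / samples

genome = [0] * GENOME_LEN
for milestone in (0, 100, 150, 190, 199):
    while fitness(genome) < milestone:
        genome[random.randrange(GENOME_LEN)] = 1  # keep only helpful flips
    print(f"fitness {milestone:3d}/{GENOME_LEN}: "
          f"~{improving_fraction(genome):.1%} of point mutations are beneficial")
```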

Lex Fridman (15:51):

Where do you think it’s all headed? Do you ever think about where, try to visualize the entirety of the evolutionary system and see if there’s an arrow to it and a destination to it?

Manolis Kellis (16:03):

So the best way to understand the future is to look at the past. If you look at the trajectory, then you can kind of learn something about the direction in which we’re heading. And if you look at the trajectory of life on Earth, it’s really about information processing. So the concept of the senses evolving one after the other, like bacteria are able to do chemotaxis, which basically means moving towards a chemical gradient. And that’s the first thing that you need to sort of hunt down food. The next step after that is being able to actually perceive light.

(16:36):

All life on this planet and all life that we know about evolved on this rotating rock. Every 24 hours, you get sunlight and dark, sunlight and dark. And light is a source of energy. Light is also information about where is up. Light is all kinds of things. So you can basically now start perceiving light. And then perceiving shapes beyond just the sort of single photoreceptor, you can now have complex eyes or multiple eyes and then start perceiving motion or perceiving direction, perceiving shapes. And then you start building infrastructure on the cognitive apparatus to start processing this information and making sense of the environment, building more complex models of the environment. So if you look at that trajectory of evolution, what we’re experiencing now with humans is basically, according to this sort of information-theoretic view of evolution, the next natural step. And it’s perhaps no surprise that we became the dominant species of the planet. Because yes, there’s so many dimensions in which some animals are way better than we are, but at least on the cognitive dimension, we’re just simply unsurpassed on this planet and perhaps the universe. But the concept that if you now trace this forward, we talked a little bit about evolvability and how things get better at evolving. One possibility is that the next layer of evolution builds the next layer of evolution. And what we’re looking at now with humans and AI is that having mastered this information capability that humans have from this quote-unquote old hardware, this basically biological evolved system that kind of somehow in the environment of Africa and then in subsequent environments is sort of dispersing through the globe was evolutionarily advantageous. That has now created technology, which now has a capability of solving many of these cognitive tasks. It doesn’t have all the baggage of the previous evolutionary layers.

(18:48):

But maybe the next round of evolution on Earth is self-replicating AI, where we’re actually using our current smarts to build better programming languages and the programming languages to build ChatGPT and that to then build the next layer of software that will then sort of help AI speed up. And it’s lovely that we’re coexisting with this AI, that sort of the creators of this next layer of evolution, this next stage, are still around to help guide it and hopefully will be for the rest of eternity as partners.

(19:22):

But it’s also nice to think about it as just simply the next stage of evolution, where you’ve kind of extracted away the biological needs. Like if you look at animals, most of them spend 80% of their waking hours hunting for food or building shelter. Humans, maybe 1% of that time. And then the rest is left to creative endeavors. And AI doesn’t have to worry about shelter, et cetera. So basically it’s all living in the cognitive space. So in a way, it might just be a very natural sort of next step to think about evolution. And that’s on the sort of purely cognitive side. If you now think about humans themselves, the ability to understand and comprehend our own genome, again, the ultimate layer of introspection, gives us now the ability to even mess with this hardware.

(20:09):

Not just augment our capabilities through interacting and collaborating with AI, but also perhaps understand the neural pathways that are necessary for empathetic thinking, for justice, for this and this and that, and sort of help augment human capabilities through neuronal interventions, through chemical interventions, through electrical interventions, to basically help steer the human bag of hardware that we kind of evolved with into greater capabilities.

(20:45):

And then ultimately, by understanding not just the wiring of neurons and the functioning of neurons, but even the genetic code, we could even at one point in the future start thinking about, well, can we get rid of psychiatric disease? Can we get rid of neurodegeneration? Can we get rid of dementia? And start perhaps even augmenting human capabilities, not just getting rid of disease.

Lex Fridman (21:12):

Can we tinker with the genome, with the hardware, or getting closer to the hardware without having to deeply understand the baggage? In the way we’ve disposed of the baggage in our software systems with AI, to some degree, not fully, but to some degree, can we do the same with the genome? Or is the genome deeply integrated into this baggage?

Manolis Kellis (21:36):

I wouldn’t want to get rid of the baggage, the baggage of what makes us awesome. So the fact that I’m sometimes angry and sometimes hungry and sometimes hangry is perhaps contributing to my creativity. I don’t want to be dispassionate. I don’t want to be another like, you know, robot. I, you know, I want to get in trouble and I want to sort of say the wrong thing. And I want to sort of, you know, make an awkward comment and sort of push myself into, you know, reactions and responses and things that can get just people thinking differently.

(22:12):

And I think our society is moving towards a humorless space where everybody’s so afraid to say the wrong thing that people kind of start quitting en masse and start like not liking their jobs and stuff like that. Maybe we should be kind of embracing that human aspect a little bit more in all of that baggage aspect and not necessarily thinking about replacing it. On the contrary, like embracing it and sort of this coexistence of the cognitive and the emotional hardwares.

Lex Fridman (22:41):

So embracing and celebrating the diversity that springs from the baggage versus kind of pushing towards and empowering this kind of pull towards conformity.

Manolis Kellis (22:58):

Yeah. And in fact, with the advent of AI, I would say, and these seemingly extremely intelligent systems that sort of can perform tasks that we thought of as extremely intelligent at the blink of an eye, this might democratize intellectual pursuits. Instead of just simply wanting the same type of brains that, you know, carry out specific ways of thinking, we can, like instead of just always only wanting, say, the mathematically extraordinary to go to the same universities, what you could simply say is like, who needs that anymore? You know, we now have AI. Maybe what we should really be thinking about is the diversity and the power that comes with the diversity where AI can do the math and then we should be getting a bunch of humans that sort of think extremely differently from each other and maybe that’s the true cradle of innovation.

Lex Fridman (23:57):

But AI can also, these large language models can also be with just a few prompts, essentially fine-tuned to be diverse from the center. So the prompts can really take you away into unique territory. You can ask the model to act in a certain way and it will start to act in that way. Is that possible that the language models could also have some of the magical diversity that makes it so damn interesting?

Manolis Kellis (24:24):

So I would say humans are the same way. So basically when you sort of prompt humans to basically, you know, in a given environment to act a particular way, they change their own behaviors. And, you know, the old saying is show me your friends and I’ll tell you who you are.

(24:45):

More like show me your friends and I’ll tell you who you’ll become. So it’s not necessarily that you choose friends that are like you, but I mean, that’s the first step. But then the second step is that, you know, the kind of behaviors that you find normal in your circles are the behaviors that you’ll start espousing. And that type of meta evolution where every action we take not only shapes our current action and the result of this action, but it also shapes our future actions by shaping the environment in which those future actions will be taken.

(25:17):

Every time you carry out a particular behavior, it’s not just a consequence for today, but it’s also a consequence for tomorrow because you’re reinforcing that neural pathway. So in a way, self-discipline is a self-fulfilling prophecy. And by behaving the way that you wanna behave and choosing people that are like you and sort of exhibiting those behaviors that are sort of desirable, you end up creating that environment as well.

Lex Fridman (25:47):

So it is a kind of, life itself is a kind of prompting mechanism, super complex. The friends you choose, the environments you choose, the way you modify the environment that you choose. Yes, but that seems like that process is much less efficient than a large language model. You can literally get a large language model through a couple of prompts to be a mix of Shakespeare and David Bowie, right? You can very aggressively change in a way that’s stable and convincing. You really transform through a couple of prompts the behavior of the model into something very different from the original.
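
As a rough sketch of what that kind of prompting looks like mechanically, the snippet below uses a hypothetical chat() helper standing in for whatever model client is being used; the message format and persona text are illustrative assumptions, not a specific real API.

```python
# Hypothetical sketch: chat() stands in for a real chat-style model client.
def chat(messages):
    # Stand-in so the example runs without any external service.
    return f"[model reply steered by: {messages[0]['content'][:40]}...]"

persona = (
    "Answer every question as a blend of Shakespearean verse and "
    "David Bowie lyrics, but keep the factual content intact."
)

messages = [
    {"role": "system", "content": persona},  # a couple of lines steer the voice
    {"role": "user", "content": "Explain what regression to the mean is."},
]

print(chat(messages))  # same underlying weights, very different behavior
```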

Manolis Kellis (26:30):

So well before ChatGPT, I would tell my students, you know, what would Manolis say right now? And you guys all have a pretty good emulator of me right now. And I don’t know if you know the programming paradigm of the rubber duck, where you basically explain to the rubber duck that’s just sitting there exactly what you did with your code and why you have a bug. And just by the act of explaining, you’ll kind of figure it out. I woke up one morning from a dream where I was giving a lecture in this amphitheater and one of my friends was basically giving me some deep evolutionary insight on how cancer genomes and cancer cells evolve.

(27:13):

And I woke up with a very elaborate discussion that I was giving and a very elaborate set of insights that he had that I was projecting onto my friend in my sleep. And obviously, this was my dream. So my own neurons were capable of doing that, but they only did that under the prompt of, you are now Piyush Gupta. You are a professor in cancer genomics. You’re an expert in that field. What do you say? So I feel that we all have that inside us, that we have that capability of basically saying, I don’t know what the right thing is, but let me ask my virtual Lex, what would you do? And virtual Lex would say, be kind. I’m like, oh, yes.

(27:54):

Or something like that, even though I myself might not be able to do it unprompted. And my favorite prompt is, think step by step. And I’m like, you know, this also works on my 10-year-old. Right? When he tries to solve a math equation all in one step, I know exactly what mistake he’ll make.

(28:13):

But if I prompt it with, oh, please think step by step, then it sort of gets you in a mindset. And I think it’s also part of the way that ChatGPT was actually trained. This whole sort of human-in-the-loop reinforcement learning has probably reinforced these types of behaviors, whereby having this feedback loop, you kind of align AI better to the prompting opportunities by humans.

Lex Fridman (28:40):

Yeah, prompting human-like reasoning steps, the step-by-step kind of thinking. Yeah, but it does seem to be, I suppose it just puts a mirror to our own capabilities, and so we can be truly impressed by our own cognitive capabilities. Because of the variety of what you can try. We don’t usually have this kind of ability; we can’t play with our own mind rigorously through Python code, right? Yeah. So this allows us to really play with all of human wisdom and knowledge, or at least knowledge, at our fingertips, and then mess with that little mind that can think and speak in all kinds of ways.

Manolis Kellis (29:19):

What’s unique is that, as I mentioned earlier, every one of us was trained by a different subset of human culture, and ChatGPT was trained on all of it. Yeah. And the difference there is that it probably has the ability to emulate almost every one of us. Yeah. The fact that you can figure out where that is in cognitive behavioral space, just by a few prompts, is pretty impressive. But the fact that that exists somewhere, you know, absolutely beautiful. And the fact that it’s encoded in an orthogonal way from the knowledge, I think is also beautiful.

(29:58):

The fact that somehow, through this extreme overparameterization of AI models, it was able to somehow figure out that context, knowledge, and form are separable, and that you can sort of describe scientific knowledge in a haiku in the form of, I don’t know, Shakespeare or something, that tells you something about the decoupling and the decouplability of these types of aspects of human psyche.

Lex Fridman (30:25):

And that’s part of the science of this whole thing. So these large language models are, you know, days old, in terms of this kind of leap that they’ve taken. And it’ll be interesting to do this kind of analysis on them of the separation of context, form, and knowledge. Where exactly does that happen? There’s already sort of initial investigations, but it’s very hard to figure out where. Is there a particular set of parameters that are responsible for a particular piece of knowledge or a particular context or a particular style of speaking?

Manolis Kellis (30:57):

So with convolutional neural networks, interpretability had many good advances, because we can kind of understand them. There’s a structure to them. There’s a locality to them. And we can kind of understand the different layers have different sort of ranges that they’re looking at. So we can look at activation features and basically see where, you know, where does that correspond to. With large language models, it’s perhaps a little more complicated, but I think it’s still achievable, in the sense that we could kind of ask, well, what kind of prompts does this generate? If I sort of drop out this part of the network, then what happens?

(31:35):

And sort of start getting at a language to even describe these types of aspects of human behavioral psychology, if you wish, from the spoken part, in the language part. And the advantage of that is that it might actually teach us something about humans as well. Like, you know, we might not have words to describe these types of aspects right now, but when somebody speaks in a particular way, it might remind us of a friend that we know from here and there and there. And if we had better language for describing that, these concepts might become more apparent in our own human psyche. And then we might be able to encode them better in machines themselves.

Lex Fridman (32:12):

But probably you and I would have certain interests with the base model, what OpenAI calls the base model, which is before the alignment, the reinforcement learning with human feedback, and before the AI safety-based kind of censorship of the model. It would be fascinating to explore, to investigate the ways that the model can generate hate speech, the kind of hate that humans are capable of. It would be fascinating. Or the kind of, of course, like sexual language, or the kind of romantic language, or all kinds of ideologies. Can I get it to be a communist? Can I get it to be a fascist? Can I get it to be a capitalist? Can I get it to be all these kinds of things? And see which parts get activated and not. Because it would be fascinating to sort of explore at the individual mind level and at a societal level, where do these ideas take hold?

(33:12):

What is the fundamental core of those ideas? Maybe the communism, fascism, capitalism, democracy are all actually connected by the fact that the human heart, the human mind is drawn to ideology, to a centralizing idea. And maybe we need a neural network to remind us of that.

Manolis Kellis (33:31):

I like the concept that the human mind is somehow tied to ideology. And I think that goes back to the promptability of ChatGPT, the fact that you can kind of say, well, think in this particular way now. And the fact that humans have invented words for encapsulating these types of behaviors. And it’s hard to know how much of that is innate and how much of that was passed on from language to language. But basically, if you look at the evolution of language, you can kind of see how young these words are in the history of language evolution that describe these types of behaviors, like kindness and anger and jealousy, et cetera. If these words are very similar from language to language, it might suggest that they’re very ancient. If they’re very different, it might suggest that this concept may have emerged independently in each different language and so on and so forth.

(34:28):

So looking at the phylogeny, the history, the evolutionary traces of language at the same time as people moving around that we can now trace thanks to genetics is a fascinating way of understanding the human psyche and also understanding how these types of behaviors emerge. And to go back to your idea about exploring the system unfiltered, I mean, in a way, psychiatric hospitals are full of those people. So basically, people whose mind is uncontrollable, who have kind of gone adrift in specific locations of their psyche.

(35:10):

And I do find this fascinating. Basically, watching movies that are trying to capture the essence of troubled minds I think is teaching us so much about our everyday selves.

(35:27):

Because many of us are able to sort of control our minds and are able to somehow hide these emotions. But every time I see somebody who’s troubled, I see versions of myself, maybe not as extreme, but I can sort of empathize with these behaviors. And I see bipolar, I see schizophrenia, I see depression, I see autism, I see so many different aspects that we kind of have names for and crystallize in specific individuals. And I think all of us have that. All of us have sort of just this multidimensional brain and genetic variations that push us in these directions, environmental exposures and traumas that push us in these directions, environmental behaviors that are reinforced by the kind of friends that we chose or friends that we were stuck with because of the environments that we grew up in. So in a way, a lot of these types of behaviors are within the vector span of every human. It’s just that the magnitude of those vectors is generally smaller for most people because they haven’t inherited that particular set of genetic variants or because they haven’t been exposed to those environments basically.

Lex Fridman (36:46):

Or something about the mechanism of reinforcement learning with human feedback didn’t quite work for them. So it’s fascinating to think about that’s what we do. We have this capacity to have all these psychiatric or behaviors associated with psychiatric disorders, but we, through the alignment process as we grow up with the parents, we kind of, we know how to suppress them.

Manolis Kellis (37:10):

We know how to control them. Every human that grows up in this world spends several decades being shaped into place. And without that, you know, maybe we would have the unfiltered ChatGPT-4.

Lex Fridman (37:24):

Every baby is basically a raging narcissist.

Manolis Kellis (37:29):

Not all of them, not all of them, believe it or not. It’s remarkable. I remember watching my kids grow up, and again, yes, part of their personality has stayed the same, but also in different phases through their life, they’ve gone through these dramatically different types of behaviors. And my daughter basically saying, basically one kid saying, oh, I want the bigger piece. The other one saying, oh, everything must be exactly equal. And the third one saying, I’m okay. You know, I might have to have the smaller part. Don’t worry about me.

Lex Fridman (38:00):

Even in the early days, in the early days of development.

Manolis Kellis (38:02):

It’s just extraordinary to sort of see these dramatically different, like, I mean, my wife and I, you know, are very different from each other, but we also have, you know, six million variants, six million loci each, if you wish. If you just look at common variants, we also have a bunch of rare variants that are inherited in a more Mendelian fashion.

(38:23):

And now you have, you know, an infinite number of possibilities for each of the kids. So basically it’s two to the six million just from the common variants. And then if you like layer in the rare variants. So let me talk a little bit about common variants and rare variants. So if you look at just common variants, they’re generally weak effect because selection selects against strong effect variants. So if something like has a big risk for schizophrenia, it won’t rise to high frequency. So the ones that are common are by definition, by selection, only the ones that had relatively weak effect. And if all of the variants associated with personality, with cognition, and all aspects of human behavior were weak effect variants, then kids would basically be just averages of their parents.

(39:09):

If it was like thousands of loci, just by law of large numbers, the average of two large numbers would be, you know, very robustly close to that middle. But what we see is that kids are dramatically different from each other. So that basically means that in the context of that common variation, you basically have rare variants that are inherited in a more Mendelian fashion that basically then sort of govern likely many different aspects of human behavior, human biology, and human psychology. And that’s, again, like if you look at sort of a person with schizophrenia, their identical twin has only 50% chance of actually being diagnosed with schizophrenia. So that basically means there’s probably developmental exposures, environmental exposures, trauma, all kinds of other aspects that can shape that. And if you look at siblings, for the common variants, it kind of drops off exponentially, as you would expect, with sharing 50% of your genome, 25% of your genome, you know, 12.5% of your genome, et cetera, with more and more distant cousins. But the fact that siblings can differ so much in their personalities that we observe every day, it can’t all be nurture. Basically, again, as parents, we spend enormous amount of energy trying to fix, quote-unquote, the nurture part, trying to, you know, get them to share, get them to be kind, get them to be open, get them to trust each other, like, you know, like overcome the prisoner’s dilemma of, you know, if everyone fends for themselves, we’re all gonna live in a horrible place. But if we’re a little more altruistic, then we’re all gonna be in a better place. And I think it’s not like we treat our kids differently, but they’re just born differently. So in a way, as a geneticist, I have to admit that there’s only so much I can do with nurture, that nature definitely plays a big component.
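
A toy simulation can make the statistical argument visible; the variant counts and effect sizes below are invented for illustration, not estimates from real genetic data. Thousands of weak-effect common variants average out, so that part of the trait barely differs between simulated siblings, while a handful of strong-effect, all-or-nothing rare variants can push siblings far apart.

```python
# Toy model with made-up numbers: many weak-effect common variants plus a few
# strong-effect rare variants. The common part is nearly identical across
# siblings (law of large numbers); the rare, Mendelian-style part is not.
import random

N_COMMON, N_RARE = 10_000, 5
COMMON_EFFECT = 1.0 / N_COMMON   # tiny per-variant effect
RARE_EFFECT = 1.0                # large per-variant effect

def sibling_trait():
    # Each common variant comes from one parent or the other (coin flip).
    common = sum(COMMON_EFFECT * random.choice((-1, 1)) for _ in range(N_COMMON))
    # Each rare variant is either inherited or not, with a big effect.
    rare = sum(RARE_EFFECT * random.choice((0, 1)) for _ in range(N_RARE))
    return common, rare

for k in range(4):
    common, rare = sibling_trait()
    print(f"sibling {k}: common-variant part {common:+.3f}, "
          f"rare-variant part {rare:+.1f}")
```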

Lex Fridman (41:01):

The selection of variants we have, the common variants and the rare variants, what can we say about the landscape of possibility they create? If you could just linger on that. So the selection of rare variants is defined how? How do we get the ones that we get? Is it just laden in that giant evolutionary baggage?

Manolis Kellis (41:29):

So I’m gonna talk about regression. Why do we call it regression? And the concept of regression to the mean, the fact that when fighter pilots in a dogfight did amazingly well, they would give them rewards. And then the next time they’re in dogfight, they would do worse. So then the Navy basically realized that, wow, or at least interpreted that as, wow, we’re ruining them by praising them, and then they’re gonna perform worse. The statistical interpretation of that is regression to the mean. The fact that you’re an extraordinary pilot, you’ve been trained in an extraordinary fashion.

(42:09):

That pushes your mean further and further to extraordinary achievement. And then in some dogfights, you’ll just do extraordinarily well. The probability that the next one will be just as good is almost nil, because this is the peak of your performance. And just by statistical odds, the next one will be another sample from the same underlying distribution, which is gonna be a little closer to the mean. So regression analysis takes its name from this type of realization in the statistical world.
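
The dogfight story is sampling noise at work, and a short simulation shows it; the skill and noise numbers below are arbitrary. With performances drawn independently around a fixed skill level, the flight after an exceptionally good one tends to be worse on average, with no praise or punishment involved.

```python
# Regression to the mean, illustrated with made-up numbers: each flight is an
# independent draw around a fixed skill level.
import random

random.seed(0)
SKILL, NOISE = 100.0, 15.0
flights = [random.gauss(SKILL, NOISE) for _ in range(100_000)]

# Look at flights that were exceptionally good, then at the one that followed.
drops = [prev - nxt
         for prev, nxt in zip(flights, flights[1:])
         if prev > SKILL + 2 * NOISE]

print(f"{len(drops)} exceptional flights; the next flight was on average "
      f"{sum(drops) / len(drops):.1f} points worse, purely by sampling.")
```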

(42:44):

Now, if you now take humans, you basically have people who have achieved extraordinary things. Einstein, for example. You know, you would call him the epitome of human intellect. Does that mean that all of his children and grandchildren will be extraordinary geniuses?

(43:04):

It probably means that they’re sampled from the same underlying distribution, but he was probably a rare combination of extremes in addition to these common variants. So you can basically interpret your kids’ variation, for example, as, well, of course, they’re gonna be some kind of sampled from the average of the parents with some kind of deviation according to the specific combination of rare variants that they have inherited. So, you know, given all that, the possibilities are endless as to sort of where you should be, but you should always interpret that with, well, it’s probably an alignment of nature and nurture.

(43:46):

And the nature has both the common variants that are acting kind of like the law of large numbers and the rare variants that are acting more in a Mendelian fashion. And then you layer in the nurture, which, again, in everyday action we make, we shape our future environment. But the genetics we inherit are shaping the future environment of not only us, but also our children. So there’s this weird nature-nurture interplay in self-reinforcement where you’re kind of shaping your own environment, but you’re also shaping the environment of your kids. And your kids are gonna be born in the context of your environment that you’ve shaped, but also with a bag of genetic variants that they have inherited.

(44:27):

And there’s just so much complexity associated with that. When we start blaming something on nature, it might just be nurture. It might just be that, well, yes, they inherited the genes from the parents, but they also were shaped by the same environment. So it’s very, very hard to untangle the two. And you should always realize that nature can influence nurture, nurture can influence nature, or at least be correlated with and predictive of, and so on and so forth.

Lex Fridman (44:53):

So I love thinking about that distribution that you mentioned, and here’s where I can be my usual ridiculous self. And I sometimes think about that army of sperm cells, however many hundreds of thousands there are. And I kind of think of all the possibilities there, because there’s a lot of variation, and one gets to win. Is that- It’s not a random one. Is it a totally ridiculous way to think about- No, not at all.

Manolis Kellis (45:23):

Yeah. So I would say evolutionarily, we are a very slow evolving species. Basically, the generations of humans are a terrible way to do selection. What you need is processes that allow you to do selection in a smaller, tighter loop. Yeah. And part of what, if you look at our immune system, for example, it evolves at a much faster pace than humans evolve, because there is actually an evolutionary process that happens within our immune cells. As they’re dividing, there’s basically VDJ recombination that basically creates this extraordinary wealth of antibodies and antigens against the environment.

(46:06):

And basically, all these antibodies are now recognizing all these antigens from the environment, and they send signals back that cause these cells that recognize the non-self to multiply. So that basically means that even though viruses evolve millions of times faster than we do, we can still have a component of ourselves which is environmentally facing, which is sort of evolving, not at the same scale, but at a very rapid pace.

(46:35):

Sperm expresses perhaps the most proteins of any cell in the body. And part of the thought is that this might just be a way to check that the sperm is intact. In other words, if you waited until that human has a liver and starts eating solid food, and sort of filtrates away, or kidneys, or stomach, et cetera, basically, if you waited until these mutations manifest, late, late in life, then you would end up not failing fast, and you would end up with a lot of failed pregnancies, and a lot of later onset psychiatric illnesses, et cetera.

(47:19):

If instead, you basically express all of these genes at the sperm level, and if they misform, they basically cause the sperm to cripple, then you have, at least on the male side, the ability to exclude some of those mutations. And on the female side, as the egg develops, there’s probably a similar process where you could sort of weed out eggs that are just not carrying beneficial mutations, or at least that are carrying highly detrimental mutations. So you can basically think of the evolutionary process in a nested loop, basically, where there’s an inner loop where you get many, many more iterations to run, and then there’s an outer loop that moves at a much slower pace, and going back to the next step of evolution, of possibly designing systems that we can use to sort of complement our own biology, or to sort of eradicate disease, and you name it, or at least mitigate some of the, I don’t know, psychiatric illnesses, neurodegenerative disorders, et cetera. You can basically, and also, you know, metabolic, immune, cancer, you name it, but simply engineering these mutations from rational design might be very inefficient. If instead you have an evolutionary loop where you’re kind of growing neurons on a dish, and you’re exploring evolutionary space, and you’re sort of shaping that one protein to be better adapt at sort of, I don’t know, recognizing light, or communicating with other neurons, et cetera, you can basically have a smaller evolutionary loop that you can run thousands of times faster than the speed it would take to evolve humans for another million years. So I think it’s important to think about sort of this evolvability as a set of nested structures that allow you to sort of test many more combinations, but in a more fixed setting.
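
The nested-loop idea can also be sketched in code; this is a loose structural illustration with a made-up toy objective, not a model of any real biological or laboratory process. The inner loop runs thousands of cheap mutate-and-select steps on a single component, while the outer loop changes the overall design only a handful of times.

```python
# Sketch of nested evolutionary loops: a fast inner search per component,
# a slow outer loop over the whole "organism" (toy fitness, invented numbers).
import random

def evolve_component(score, steps=2000):
    """Inner, fast loop: mutate one component many times, keep improvements."""
    best = [random.random() for _ in range(10)]
    for _ in range(steps):
        candidate = [x + random.gauss(0, 0.05) for x in best]
        if score(candidate) > score(best):
            best = candidate
    return best

def component_score(component):
    # Toy objective: the component's values should sum to 5 (arbitrary target).
    return -(sum(component) - 5.0) ** 2

def organism_fitness(components):
    return sum(component_score(c) for c in components)

organism = [[random.random() for _ in range(10)] for _ in range(3)]
for generation in range(5):                 # outer, slow loop
    part = random.randrange(len(organism))  # refine one part per generation
    organism[part] = evolve_component(component_score)
    print(f"generation {generation}: fitness {organism_fitness(organism):.4f}")
```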

Lex Fridman (49:07):

Yeah, that’s fascinating that the mechanism there is for sperm to express proteins, to create a testing ground early on, so that the failed designs don’t make it.

Manolis Kellis (49:20):

Yeah, I mean, in design of engineering systems, fail fast is one of the principles you learn. Like, basically, you assert something. Why do you assert that? Because if that something ain’t right, you better crash now than sort of let it crash at an unexpected time. And in a way, you can think of it as like 20,000 assert functions. Assert protein can fold, assert protein can fold. And if any of them fail, that sperm is gone.
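
To spell out the engineering analogy, here is a minimal fail-fast sketch; the protein check and names are placeholders, not real genes or real biology.

```python
# Fail fast: check every component up front so a broken design crashes
# immediately instead of failing later in some unexpected place. In the
# analogy, a sperm cell expressing thousands of proteins is running
# thousands of asserts before the "program" ever ships.
def can_fold(protein):
    # Placeholder for "this protein folds correctly."
    return protein["folds"]

proteins = [{"name": f"protein_{i}", "folds": True} for i in range(20_000)]
proteins[137]["folds"] = False  # one broken component, planted for the demo

for p in proteins:
    # Any failing check stops everything right here, by design.
    assert can_fold(p), f"{p['name']} failed to fold, failing fast"

print("all components passed")  # not reached in this demo; the assert fires first
```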

Lex Fridman (49:44):

Well, I just like the fact that I’m the winning sperm.

Manolis Kellis (49:46):

I’m the result of the winner, winning, hashtag winning. My wife always plays me this French song that actually sings about that. It’s like, you know, remember, in life, we were all the first one time. So at least one time, you were the first.

Lex Fridman (50:03):

I should mention, as a brief tangent, back to the place where we came from, which is the base model that I mentioned for OpenAI, which is before the reinforcement learning with human feedback. And you kind of give this metaphor of it being kind of like a psychiatric hospital.

Manolis Kellis (50:18):

I like that, because it’s basically all of these different angles at once. Like, you basically have the more extreme versions of human psyche.

Lex Fridman (50:26):

So the interesting thing is, I’ve talked with folks in OpenAI quite a lot, and they say it’s extremely difficult to work with that model. Yeah, kind of like it’s extremely difficult to work with some humans. The parallels there are very interesting, because once you run the alignment process, it’s much easier to interact with it. But it makes you wonder what the capacity, what the underlying capability of the human psyche is, as in the same way that what is the underlying capability of a large language model.

Manolis Kellis (50:55):

And remember earlier, when I was basically saying that part of the reason why it’s so promptable, so malleable, is because of that alignment process, of that alignment work? It’s kind of nice that the engineers at OpenAI have the same interpretation that, you know, in fact, it is that. And this whole concept of easier to work with, I wish that we could work with more diverse humans.

(51:27):

In a way, and sort of that’s one of the possibilities that I see with the advent of these large language models. The fact that it gives us the chance to both dial down friends of ours that we can’t interpret or that are just too edgy to sort of really truly interact with, where you could have a real-time translator. Just the same way that you can translate English to Japanese or Chinese or Korean by like real-time adaptation. You could basically suddenly have a conversation with your favorite extremist on either side of the spectrum and just dial them down a little bit.

Lex Fridman (52:07):

And of course, not you and I, but you could have a friend who’s a complete asshole, but it’s a different base level. So you can actually tune it down to like, okay, they’re not actually being an asshole there. They’re actually expressing love right now. It’s just that this is a- They have their way of doing that. And they probably live in New York if we’re just to pick a random location.

Manolis Kellis (52:30):

So yeah, so you can basically layer out contexts. You can basically say, ooh, let me change New York to Texas and let me change extreme left to extreme right or somewhere in the middle or something. And I also like the concept of being able to listen to the information without being dissuaded by the emotions. In other words, everything humans say has an intonation, has some kind of background that they’re coming from. It reflects the way that they’re thinking of you, reflects the impression that they have of you. And all of these things are intertwined.

(53:13):

But being able to disconnect them, being able to sort of- Self-improvement is one of the things that I’m constantly working on. And being able to receive criticism from people who really hate you is difficult because it’s layered in with that hatred. But deep down, there’s something that they say that actually makes sense. Or people who love you might layer it in a way that doesn’t come through. But if you’re able to sort of disconnect that emotional component from the sort of self-improvement and basically when somebody says, whoa, that was a bunch of bullshit, did you ever do the control, this and this and that, you could just say, oh, thanks for the very interesting presentation. I’m wondering, what about that control? Then suddenly you’re like, oh yeah, of course, I’m gonna run that control, that’s a great idea. Instead of that was a bunch of BS, you’re like, argh, you’re sort of hitting on the brakes and you’re trying to push back against that.

(54:12):

So any kind of criticism that comes after that is very difficult to interpret in a positive way because it helps reinforce the negative assessment of your work. When in fact, if we disconnected the technical component from the negative assessment, then you’re embracing the negative, then you’re embracing the technical component, you’re gonna fix it. Whereas if it’s coupled with, and if that thing is real and I’m right about your mistake, then it’s a bunch of BS, then suddenly you’re like, you’re gonna try to prove that that mistake does not exist.

Lex Fridman (54:45):

Yeah, it’s fascinating to carry the information. I mean, this is what you’re essentially able to do here is you carry the information in the rich complexity that information contains. So it’s not actually dumbing it down in some way. It’s still expressing it, but taking off.

Manolis Kellis (54:59):

But you can dial down the emotional.

Lex Fridman (55:02):

Emotion side, which is probably so powerful for the internet or for social networks.

Manolis Kellis (55:08):

Again, when it comes to understanding each other, like for example, I don’t know what it’s like to go through life with a different skin color. I don’t know how people will perceive me. I don’t know how people will respond to me. We don’t often have that experience. But in a virtual reality environment or in a sort of AI interactive system, you could basically say, okay, now make me Chinese or make me South African or make me, you know, a Nigerian, you can change the accent. You can change layers of that contextual information and then see how the information is interpreted. And you can rehear yourself through a different angle.

(55:49):

You can hear others. You can have others react to you from a different package. And then hopefully we can sort of build empathy by learning to disconnect all of these social cues that we get from like how a person is dressed. You know, if they’re wearing a hoodie or if they’re wearing a shirt or if they’re wearing a, you know, jacket, you get very different emotional responses that, you know, I wish we could overcome as humans. And perhaps large language models and augmented reality and deep fakes can kind of help us overcome all that.

Lex Fridman (56:28):

In what way do you think these large language models and the thing they give birth to in the AI space will change this human experience, the human condition? The things we’ve talked across many podcasts about that makes life so damn interesting and rich, love, fear, fear of death, all of it. If we could just begin kind of thinking about how does it change for the good and the bad, the human condition.

Manolis Kellis (57:02):

Human society is extremely complicated. We have come from a hunter-gatherer society to an agricultural and farming society where the goal of most professions was to eat and to survive. And with the advent of agriculture, the ability to live together in societies, humans could suddenly be valued for different skills.

(57:33):

If you don’t know how to hunt, but you’re an amazing potter, then you fit in society very well because you can sort of make your pottery and you can barter it for rabbits that somebody else caught. And the person who hunts the rabbits doesn’t need to make pots because you’re making all the pots. And that specialization of humans is what shaped modern society.

(58:00):

And with the advent of currencies and governments and credit cards and Bitcoin, you basically now have the ability to exchange value for the kind of productivity that you have. So basically I make things that are desirable to others, I can sell them and buy back food, shelter, et cetera. With AI, the concept of I am my profession might need to be revised because I defined my profession in the first place as something that humanity needed that I was uniquely capable of delivering. But the moment we have AI systems able to deliver these goods, for example, writing a piece of software or making a self-driving car or interpreting the human genome, then that frees up more of human time for other pursuits. These could be pursuits that are still valuable to society. I could basically be 10 times more productive at interpreting genomes and do a lot more.

(59:10):

Or I could basically say, oh, great, the interpreting-genomes part of my job now only takes me 5% of the time instead of 60% of the time. So now I can do more creative things. I can explore not new career options, but maybe new directions for my research lab. I can be more productive, contribute more to society. And if you look at this giant pyramid that we have built on top of the subsistence economy, what fraction of U.S. jobs are going to feeding all of the U.S.? Less than 2%. Basically, the gain in productivity is such that 98% of the economy is beyond just feeding ourselves. And that basically means that we kind of have built this system of interdependencies of needed or useful or valued goods that sort of make the economy run, and that the vast majority of wealth goes to things that we now call needs, but used to be wants. So basically, I want to fly a drone. I want to buy a bicycle. I want to buy a nice car. I want to have a nice home. I want to, et cetera, et cetera, et cetera.

(01:00:24):

So, and then sort of, what is my direct contribution to my eating? I mean, I’m doing research on the human genome. I mean, this will help humans. It will help all of humanity. But how is that helping the person who’s giving me poultry or vegetables? So in a way, I see AI as perhaps leading to a dramatic rethinking of human society. If you think about sort of the economy being based on intellectual goods that I’m producing, what if AI can produce a lot of these intellectual goods and satisfies that need? Does that now free humans for more artistic expression, for more emotional maturing, for basically having a better work-life balance, being able to show up for your two hours of work a day or two hours of work like three times a week with like immense rest and preparation and exercise and you’re sort of clearing your mind and suddenly you have these two amazingly creative hours. You basically show up at the office as your AI is busy answering your phone call, making all your meetings, revising all your papers, et cetera. And then you show up for those creative hours and you’re like, all right, autopilot, I’m on.

(01:01:34):

And then you can basically do so, so much more that you would perhaps otherwise never get to because you’re so overwhelmed with these mundane aspects of your job. So I feel that AI can truly transform the human condition from realizing that we don’t have jobs anymore. We now have vocations. And there’s this beautiful analogy of three people laying bricks and somebody comes over and asks the first one, what are you doing? He’s like, oh, I’m laying bricks. Second one, what are you doing? I’m building a wall. And the third one, what are you doing? I’m building this beautiful cathedral. So in a way, the first one has a job, the last one has a vocation.

(01:02:17):

And if you ask me, what are you doing? Oh, I’m editing a paper. Then I have a job. What are you doing? I’m understanding human disease circuitry. I have a vocation. So in a way, AI can allow us to enjoy more of our vocation by taking away, offloading, some of the job part of our daily activities.

Lex Fridman (01:02:39):

So we all become the builders of cathedrals. Correct. Yeah, and we follow intellectual pursuits, artistic pursuits. I wonder how that really changes at a scale of several billion people, everybody playing in the space of ideas, in the space of creations.

Manolis Kellis (01:02:59):

So ideas, maybe for some of us, maybe you and I are in the job of ideas, but other people are in the job of experiences. Other people are in the job of emotions, of dancing, of creative, artistic expression, of skydiving, and you name it. So basically, these, again, the beauty of human diversity is exactly that, that what rocks my boat might be very different from what rocks other people’s boat. And what I’m trying to say is that maybe AI will allow humans to truly, like, not just look for, but find meaning. And sort of, you don’t need to work, but you need to keep your brain at ease. And the way that your brain will be at ease is by dancing and creating these amazing movements, or creating these amazing paintings, or creating, I don’t know, something that sort of changes, that touches at least one person out there that sort of shapes humanity through that process. And instead of working your mundane programming job, where you hate your boss, and you hate your job, and you say you hate that darn program, et cetera, you’re like, well, I don’t need that. I can offload that, and I can now explore something that will actually be more beneficial to humanity, because the mundane parts can be offloaded.

Lex Fridman (01:04:23):

I wonder if it localizes all the things you’ve mentioned, all the vocations. So you mentioned that you and I might be playing in the space of ideas, but there’s two ways to play in the space of ideas, both of which we’re currently engaging in. One is the communication of that to other people. It could be a classroom full of students, but it could be a podcast. It could be something that’s shown on YouTube, and so on. Or it could be just the act of sitting alone and playing with ideas in your head, or maybe with a loved one, having a conversation that nobody gets to see. The experience of just looking up at the sky and wondering different things, maybe quoting some philosophers from the past, and playing with those little ideas, and that little exchange is forgotten forever, but you got to experience it. And I wonder if it localizes that exchange of ideas, so that with AI, it’ll become less and less valuable to communicate with a large group of people, and you will live life intimately and richly just with that circle of meat bags that you seem to love.

Manolis Kellis (01:05:36):

So the first is, even if you’re alone in a forest, having this amazing thought, when you exit that forest, the baggage that you carry has been shifted, has been altered by that thought. When I bike to work in the morning, I listen to books.

(01:05:55):

And I’m alone, no one else is there. I’m having that experience by myself. And yet, in the evening when I speak with someone, an idea that was formed there could come back. Sometimes when I fall asleep, I fall asleep listening to a book. And in the morning, I’ll be full of ideas that I never even processed consciously. I’ll process them unconsciously. And they will shape that baggage that I carry. That will then shape my interactions, and again, affect ultimately all of humanity in some butterfly effect minute kind of way. So that’s one aspect. The second aspect is gatherings.

(01:06:33):

So basically, you and I are having a conversation which feels very private, but we’re sharing with the world. And then later tonight, you’re coming over, and we’re having a conversation that will be very public with dozens of other people, but we will not share with the world. So in a way, which one’s more private? The one here or the one there? Here, there’s just two of us, but a lot of others listening. There, a lot of people speaking and thinking together and bouncing off each other.

(01:07:03):

And maybe that will then impact your millions of, you know, audience through your next conversation. And I think that’s part of the beauty of humanity, the fact that no matter how small, how alone, how broadcast immediately or later on something is, it still percolates through the human psyche.

Lex Fridman (01:07:27):

Human gatherings. All throughout human history, there’s been gatherings. I wonder how those gatherings have impacted the direction of human civilization. Just thinking of in the early days of the Nazi party, it was a small collection of people gathering. And the kernel of an idea, in that case an evil idea, gave birth to something that actually had a transformative impact on all of human civilization. And then there’s similar kind of gatherings that lead to positive transformations. This is probably a good moment to ask you on a bit of a tangent, but you mentioned it. You put together salons with gatherings, small human gatherings, with folks from MIT, Harvard here in Boston, friends, colleagues. What’s your vision behind that?

Manolis Kellis (01:08:22):

So it’s not just MIT people and it’s not just Harvard people. We have artists, we have musicians, we have painters, we have dancers, we have cinematographers, we have so many different diverse folks. And the goal is exactly that, celebrate humanity. What is humanity? Humanity is the all of us. It’s not the any one subset of us.

(01:08:48):

And we live in such an amazing, extraordinary moment in time where you can bring people from such diverse professions all living in the same city. You know, we live in an extraordinary city where you can have extraordinary people who have gathered here from all over the world. So my father grew up in a village on an island in Greece that didn’t even have a high school. To go get a high school education, he had to move away from his home. My mother grew up on another small island in Greece.

(01:09:19):

They did not have this environment that I am now creating for my children. My parents were not academics. They didn’t have these gatherings. So I feel so privileged as an immigrant to basically be able to offer to my children the nurture that my ancestors did not have. Greece was under Turkish occupation until 1821. My dad’s island was liberated in 1920. So they were under Turkish occupation for hundreds of years. These people did not know what it’s like to be Greek, let alone go to an elite university or be surrounded by these extraordinary humans.

(01:10:09):

The way that I’m thinking about these gatherings is that I’m shaping my own environment and I’m shaping the environment that my children get to grow up in. So I can give them all my love, I can give them all my parenting, but I can also give them an environment where, as immigrants, we feel welcome here. I mean, my wife grew up on a farm in rural France. Her father was a farmer, her mother was a school teacher. For me and for my wife to be able to host these extraordinary individuals, whom we feel so privileged by, so humbled by, is amazing. And I think it’s celebrating the welcoming nature of America, the fact that it doesn’t matter where you grew up. And many, many of our friends at these gatherings are immigrants themselves. They grew up in Pakistan, in all kinds of places around the world, and are now able to gather under one roof as human to human. No one is judging you for your background, for the color of your skin, for your profession. Everyone gets to raise their hands and share ideas.

Lex Fridman (01:11:18):

So a celebration of humanity and a kind of gratitude for having traveled quite a long way to get here.

Manolis Kellis (01:11:26):

And if you look at the diversity of topics as well, I mean, we had a school teacher present on teaching immigrants, a book called Making Americans. We had a presidential advisor to four different presidents come and talk about the changing of US politics. We had a musician, a composer from Italy who lives in Australia come and present his latest piece and fundraise. We had painters come and sort of show their art and talk about it. We’ve had authors of books on leadership.

(01:12:05):

We’ve had intellectuals like Steven Pinker, and the breadth is just extraordinary. This crowd basically loves not just the diversity of the audience, but also the diversity of the topics. And the last few were with Scott Aaronson on AI and alignment and all of that. So a bunch of beautiful weirdos. Exactly.

Lex Fridman (01:12:30):

And beautiful human beings. All of the outcasts in one room. And just like you said, basically every human is a kind of outcast in this sparse distribution far away from the center. But it’s not recorded. It’s just a small human gathering. Just for the moment. In this world that seeks to record so much, it’s powerful to get so many interesting humans together and not record.

Manolis Kellis (01:12:59):

It’s not recorded, but it percolates.

Lex Fridman (01:13:03):

It’s recorded in the minds of the people. It shapes everyone’s mind. So allow me to please return to the human condition and one of the nice features of the human condition is love. Do you think humans will fall in love with AI systems and maybe they with us? So that aspect of the human condition, do you think that will be affected?

Manolis Kellis (01:13:28):

So in Greek, there are many, many words for love. Some of them mean friendship. Some of them mean passionate love. Some of them mean fraternal love, et cetera. So I think AI doesn’t have the baggage that we do. And it doesn’t have all of the subcortical regions that we kind of started with before we evolved all of the cognitive aspects. So I would say AI is faking it when it comes to love. But when it comes to friendship, when it comes to being able to be your therapist, your coach, your motivator, someone who synthesizes stuff for you, who writes for you, who interprets a complex passage, who compacts down a very long lecture or a very long text, I think that friendship will definitely be there. Like the fact that I can have my companion, my partner, my AI who has grown to know me well, and that I can trust with all of the darkest parts of myself, all of my flaws, all of the stuff that I only talk about to my friends, and basically say, listen, here’s all this stuff that I’m struggling with.

(01:14:44):

Someone who will not judge me, who will always be there to better me. In some ways, not having the baggage might make for your best friend, for your confidant that can truly help reshape you. So I do believe that human-AI relationships will absolutely be there, but not the passion, more the mentoring.

Lex Fridman (01:15:08):

That’s a really interesting thought. To play devil’s advocate: if those AI systems are that locked in on faking the baggage, who are you to say that the AI system that begs you not to leave it doesn’t love you? Who are you to say that this AI system that writes poetry to you, that is afraid of death, afraid of life without you, or vice versa, that creates the kind of drama that humans create, the power dynamics that can exist in a relationship, an AI system that is abusive one day and romantic the next, all the different variations of relationships, and that consistently holds the full richness of a particular personality, is not a system you can love in a romantic way? Why is it faking it if it sure as hell seems real?

Manolis Kellis (01:16:08):

There’s many answers to this. The first is that it’s all in the eye of the beholder. Who tells me that I’m not faking it either? Maybe all of these subcortical systems that make me have different emotions, maybe they don’t really matter. Maybe all that matters is the neocortex, and that’s where all of my emotions are encoded, and the rest is just bells and whistles. That’s one possibility. And therefore, who am I to judge that it is faking it when maybe I’m faking it as well? The second is that neither of us is faking it. Maybe it’s just an emergent behavior of these neocortical systems that is truly capturing the same exact essence of love and hatred and dependency and, you know, reverse psychology that we have. So it is possible that it’s simply an emergent behavior and that we don’t have to encode these additional architectures, that all we need is more parameters, and some of these parameters can be all of the personality traits. A third option is that just by telling me, oh, look, now I’ve built an emotional component to AI, it has a limbic system, it has a lizard brain, et cetera.

(01:17:30):

And suddenly I’ll say, oh, cool, it has the capability of emotion. So now when it exhibits the exact same unchanged behaviors that it does without it, I, as the beholder, will be able to attribute to it the emotional attributes that I would to another human being, and therefore have that mental model of that other person. So again, I think a lot of relationships is about the mental models that you project on the other person and that they’re projecting on you. And in that respect, I do think that even without the embodied intelligence part, without having ever experienced what it’s like to be heartbroken, the guttural feeling of misery, I could still attribute to that system traits of human feelings and emotions.

Lex Fridman (01:18:33):

And in the interaction with that system, something like love emerges. So it’s possible that love is not a thing that exists in your mind, but a thing that exists in the interaction of the different mental models you have of other people’s minds, or of another person’s mind. And so, as long as one of the entities, let’s just take the easy case, one of the entities is human and the other is AI, it feels very natural that, from the perspective of at least the human, there is a real love there. And then the question is, how does that transform human society? If it’s possible, which I believe will be the case, I don’t know what to make of it, but I believe it will be the case, that there are hundreds of millions of romantic partnerships between humans and AIs, what does that mean for society?

Manolis Kellis (01:19:29):

If you look at longevity, and if you look at happiness, and if you look at late-life well-being, the love of another human is one of the strongest indicators of health and long life. And I have countless stories where, as soon as the romantic partner of 60-plus years dies, within three, four months the other person dies as well, just from losing their love.

(01:19:59):

I think the concept of being able to satisfy that emotional need that humans have, even just as a mental health sort of service, to me, you know, that’s a very good society.

(01:20:12):

It doesn’t matter if your love is wasted, quote-unquote, on a machine; it is, you know, the placebo, if you wish, that makes the patient better anyway. There’s nothing behind it, but just the feeling that you’re being loved will probably engender all of the emotional attributes of it. The other story that I wanna tell in this whole concept of faking, and maybe I’m a terrible dad, but I was asking my kids: does it matter if I am a good dad, or does it matter if I act like a good dad?

(01:20:48):

In other words, if I give you love and shelter and kindness and warmth and all of the above, you know, does it matter that I’m a good dad? Conversely, if I, deep down, love you to the end of eternity, but I’m always gone, which dad would you rather have? The cold, ruthless killer that will show you only love and warmth and nourish you and nurture you, or the amazingly warm-hearted but works five jobs and you never see them?

Lex Fridman (01:21:23):

And what’s the answer? I mean, from the perspective- I don’t know the answer. I think you’re a romantic, so you say it matters what’s on the inside, but pragmatically speaking,

Manolis Kellis (01:21:32):

why does it matter? The fact that I’m even asking the question basically says it’s not enough to love my kids. I better freaking be there to show them that I’m there. So basically, of course, you know, everyone’s a good guy in their story. So in my story, I’m a good dad. But if I’m not there, it’s wasted. So the reason why I ask the question is for me to say, you know, does it really matter that I love them if I’m not there to show it?

Lex Fridman (01:22:00):

But it’s also possible that what reality is, is the you showing it; that what you feel on the inside is little narratives and games you play inside your mind that don’t really matter; that the thing that truly matters is how you act. And that’s what AI systems can quote-unquote fake. And if that’s all that matters, then it’s actually real, not fake.

Manolis Kellis (01:22:26):

Again, let there be no doubt, I love my kids to pieces. But my worry is, am I being a good enough dad? And what does that mean? If I’m only there to do their homework and make sure that they do all the stuff, but I don’t show it to them, then I might as well be a terrible dad. But I agree with you that an AI system could basically play the role of a father figure for many children that don’t have one, or the role of parents, or the role of siblings. If a child grows up alone, maybe their emotional state will be very different than if they grow up with an AI sibling.

Lex Fridman (01:23:06):

Well, let me ask, and this is for your kids, for loved ones in general, let’s go to the trivial case of just texting back and forth. What if we create a large language model, fine-tuned on Manolis? And while you’re at work, every once in a while, it’ll replace you: you’ll just activate the auto-Manolis, and it’ll text them exactly in your way. Is that cheating? I can’t wait. I mean, it’s the same guy.

Manolis Kellis (01:23:42):

I cannot wait, seriously.

Lex Fridman (01:23:44):

But wait, wouldn’t that have a big impact on you emotionally? Because now I’m replaceable.

Manolis Kellis (01:23:50):

I love that. No, seriously, I would love that. I would love to be replaced. I would love to be replaceable. I would love to have a digital twin, so that we don’t have to wait for me to die or disappear in a plane crash or something to replace me. I’d love that model to be constantly learning, constantly evolving, adapting with my changing, growing self.

(01:24:15):

As I’m growing, I want that AI to grow. And I think this will be extraordinary. Number one, when I’m giving advice, being able to be there for more than one person. Why does someone need to be at MIT to get advice from me? People in India could download it. So many students contact me from across the world who wanna come and spend the summer with me. I wish they could do that, all of them. We don’t have room for all of them, but I wish I could do that for all of them. And that aspect is the democratization of relationships.

(01:24:51):

I think that is extremely beneficial. The other aspect is, I want to interact with that system. I want to look under the hood. I want to evaluate it. I want to see whether, when I see it from the outside, the emotional parameters are off, or the cognitive parameters are off, or the set of ideas that I’m giving are not quite right anymore. I wanna see how that system evolves. I wanna see the impact of exercise or sleep on my own cognitive system.

(01:25:27):

I wanna be able to decompose my own behavior into a set of parameters that I can evaluate, and look at my own personal growth. I’d love to, at the end of the day, have my model say, well, you know, you didn’t quite do well today. You weren’t quite there. And grow from that experience. And I think the concept of being able to become more aware of our own personalities, become more aware of our own identities, maybe even interact with ourselves and hear how we are being perceived, would be immensely helpful in self-growth and self-actualization.

Lex Fridman (01:26:07):

The experiments I would do on that thing... Because one of the challenges, of course, is you might not like what you see in your interaction, and you might say, well, the model’s not accurate. But then you have to consider the possibility that the model is accurate and that there are actually flaws in your mind. I would definitely prod it and see how many biases of different kinds I have. And I would, of course, go to the extremes.

(01:26:35):

I would go, like, how jealous can I make this thing? Like, at which stages does it get super jealous? You know, or at which stages does it get angry? Can I, like, provoke it? Can I get it, like, completely? Like, what are your triggers? But not only triggers, can I get it to go, like, lose its mind? Like, go completely nuts? Just don’t exercise for a few days. That’s basically it, yes. I mean, that’s an interesting way to prod yourself, almost like a self-therapy session.

Manolis Kellis (01:27:05):

And the beauty of such a model is that if I am replaceable, if the parts that I currently do are replaceable, that’s amazing because it frees me up to work on other parts that I don’t currently have time to develop. Maybe all I’m doing is giving the same advice over and over and over again. Like, just let my AI do that.

(01:27:26):

And I can work on the next stage and the next stage and the next stage. So I think in terms of freeing up: they say a programmer is someone who cannot do the same thing twice, because the second time, you write a program to do it. And I wish I could do that for my own existence. I could just figure out things, keep improving, improving, improving. And once I’ve nailed it, let the AI loose on that. And maybe even let the AI better it, better than I could have.

Lex Fridman (01:27:52):

But doesn’t that break down? You said you can work on new things, but you said digital twin, and there’s no reason it can’t be millions of digital Manolises. Aren’t you lost in a sea of Manolises? The original is hardly the original. It’s just one of millions.

Manolis Kellis (01:28:19):

I wanna have the room to grow. Maybe compared to the new version of me, the actual me will get slightly worse sometimes, slightly better other times. When it gets slightly better, I’d like to emulate that, have a much higher standard to meet, and keep going.

Lex Fridman (01:28:36):

But does it make you sad that your loved ones, the physical, real loved ones, might kinda, like, start cheating on you with the other Manolises?

Manolis Kellis (01:28:46):

I wanna be there 100% for each of them. So I have zero qualms about whether it’s the physical me or not. Like, zero jealousy.

Lex Fridman (01:29:00):

Wait a minute, but isn’t that, like, don’t we hold on to that? Isn’t that why we’re afraid of death? We don’t wanna lose this thing we have going on. Isn’t that an ego death? When there’s a bunch of other Manolises, you get to look at them. They’re not you. They’re just very good copies of you. They get to live a life. I mean, it’s fear of missing out. It’s FOMO. They get to have interactions, and you don’t get to have those interactions.

Manolis Kellis (01:29:28):

There’s two aspects of every person’s life. There’s what you give to others, and there’s what you experience yourself. Life truly ends when your experiencing ends. But others experiencing you doesn’t need to end.

Lex Fridman (01:29:51):

But your experience, you could still, I guess you’re saying the digital twin does not limit your ability to truly experience.

Manolis Kellis (01:29:59):

To experience as a human being. The downside is when my wife or my kids will have a really emotional interaction with my digital twin, and I won’t know about it. So I will show up, and they now have the baggage, but I don’t.

(01:30:16):

So basically, what makes interactions between humans unique in this sharing and exchanging kind of way is the fact that we are both shaped by every one of our interactions. I think the model of the digital twin works for dissemination of knowledge, of advice, et cetera, where I want to have wise people give me advice across history. I want to have chats with Gandhi, but Gandhi won’t necessarily learn from me, but I will learn from him. So in a way, the dissemination and the democratization rather than the building of relationships.

Lex Fridman (01:30:51):

So the emotional aspect, so there should be an alert when the AI system is interacting with your loved ones, and all of a sudden it starts getting emotionally fulfilling, like a magical moment. There should be, okay, stop, AI system freezes. There’s an alert on your phone. You need to take over.

Manolis Kellis (01:31:10):

Yeah, yeah, I take over, and then whoever I was speaking with can have the AI,

Lex Fridman (01:31:15):

or one of the AIs. This is such a tricky thing to get right. I mean, it’s going to go wrong in so many interesting ways that we’re gonna have to learn from as a society, in the process of trying to automate our tasks and having a digital twin. You know, for me personally, if I can have a relatively good copy of myself, I would set it to start answering emails, and I would set it to start tweeting. Those are things I would like to replace.

Manolis Kellis (01:31:44):

It gets better. What if that one is actually way better than you?

Lex Fridman (01:31:47):

Yeah, exactly. Then you’re like... Well, I wouldn’t want that, because... Why? Because then I would never be able to live up to it. Like, what if the people that love me start loving that thing, and then I already fall short?

Manolis Kellis (01:32:03):

Be falling short even more. So listen, I’m a professor. The stuff that I give to the world is, number one, the stuff that I teach, number two, the discoveries that we make in my research group, but much more importantly, the people that I train.

(01:32:20):

They are now out there in the world teaching others. If you look at my own trainees, they are extraordinarily successful professors: Anshul Kundaje at Stanford, Alex Stark at IMP in Vienna, Jason Ernst at UCLA, Andreas Pfenning at CMU. For each of them, I’m like, wow, they’re better than I am, and I love that. So maybe your role will be to train better versions of yourself, and they will be your legacy. Not you doing everything, but you training much better versions of Lex Fridman than you are, and then they go off to do their mission, which is in many ways what this mentorship model of academia does.

Lex Fridman (01:33:03):

But the legacy is ephemeral. It doesn’t really live anywhere. The legacy, it’s not like written somewhere. It just lives through them.

Manolis Kellis (01:33:12):

But you can continue improving, and you can continue making even better versions of you.

Lex Fridman (01:33:16):

Yeah, but they’ll do better than me at creating new versions. It’s awesome, but, you know, there’s an ego that says there’s a value to an individual, and it feels like this process decreases the value of the individual, this meat bag. All right, if there’s good digital copies of people, then there’s more flourishing of human thought and ideas and experiences, but there’s less value to the individual human.

Manolis Kellis (01:33:46):

I don’t have any such limitations; I don’t have that feeling at all. I remember in one of our interviews, you had asked me about the meaning of life, and I was basically saying: I felt useful today, and I was at my maximum. I was, you know, at 100%, and I gave good ideas, and I was a good person, a good advisor, a good husband, a good father. That was a great day, because I was useful. And if I can be useful to more people by having a digital twin, I will be liberated, because my urge to be useful will be satisfied.

(01:34:25):

Doesn’t matter whether it’s direct me or indirect me, whether it’s my students that I’ve trained, my AI that I’ve trained. I think there’s a sense that my mission in life is being accomplished, and I can work on my self-growth.

Lex Fridman (01:34:41):

I mean, that’s a very Zen state. That’s why people love you. It’s a Zen state you’ve achieved. But do you think most of humanity would be able to achieve that kind of thing? People really hold on to the value of their own ego, that it’s not just being useful. Being useful is nice as long as it builds up this reputation and that meatbag is known as being useful, therefore it has more value. People really don’t wanna let go of that ego thing.

Manolis Kellis (01:35:08):

One of the books that I reprogrammed my brain with at night was called Ego Is the Enemy. Ego is the enemy. And it’s about basically being able to just let go. My advisor used to say, you can accomplish anything as long as you don’t seek to get credit for it.

Lex Fridman (01:35:26):

Ha ha. That’s beautiful to hear, especially from a person who’s existing in academia. You’re right, the legacy lives through the people you mentor. It’s the actions, it’s the outcome. What about the fear of death? How does this change it?

Manolis Kellis (01:35:41):

Again, to me, death is when I stop experiencing. And I never want that to stop. I want to live forever. As I said last time, every day, the same day forever. Or one day every 10 years forever. Any of the forevers, I’ll take it.

Lex Fridman (01:35:59):

So you wanna keep getting the experiences, the new experiences.

Manolis Kellis (01:36:01):

Gosh. Gosh, it is so fulfilling. Just the self-growth, the learning, the growing, the comprehending. It’s addictive, it’s a drug. Just the drug of intellectual stimulation, the drug of growth, the drug of knowledge. It’s a drug.

Lex Fridman (01:36:23):

But then there’ll be thousands or millions of Manolises that live on after your biological system is no longer there. More power to them. Ha ha ha ha. Do you think that, quite realistically, it does mean that interesting people, such as yourself, live on? You know, if I can interact with the fake Manolises, those interactions live on in my mind.

Manolis Kellis (01:36:50):

Makes sense. About 10 years ago, I started recording every single meeting that I had. Every single meeting. We just start either the voice recorder, at the time, or now a Zoom meeting. And I record, my students record, every single one of our conversations is recorded. And I always joke that the ultimate goal is to create a virtual me and just get rid of me, basically. Not get rid of me, but not have the need for me anymore. Another goal is to be able to go back and say, how have I changed from five years ago?

(01:37:23):

Was I different? Was I giving advice in a different way? Was I giving different types of advice? Has my philosophy about how to write papers or how to present data or anything like that changed?

(01:37:38):

You know, in academia and in mentoring, a lot of the interaction is my knowledge and my perception of the world going to my students, but a lot of it is also in the opposite direction. Like the other day, I had a conversation with one of my postdocs, and I was like, hmm, let me give you some advice: you could do this. And then she said, well, I’ve thought about it, and I’ve decided to do that instead. And we talked about it for a few minutes. And then at the end, I’m like, you know, I’ve just grown a little bit today. Thank you. She convinced me that my advice was incorrect. She could have just said, yeah, sounds great, and just not done it. But by constantly teaching my students and my mentees that I’m here to grow, she felt empowered to say, here are my reasons why I will not follow that advice. And part of me growing is saying, whoa, I just understood your reasons. I think I was wrong. And now I’ve grown from it. And that’s what I wanna do. I wanna constantly keep growing in this sort of bidirectional exchange of advice.

Lex Fridman (01:38:49):

I wonder if you can capture the trajectory of that to where the AI could also map forward, project forward the trajectory after you’re no longer there, how the different ways you might evolve.

Manolis Kellis (01:39:03):

So again, we’re discussing a lot about these large language models, and we’re projecting these cognitive states of ourselves onto them. But I think on the AI front, a lot more needs to happen. So basically right now, it’s these large language models, and we believe that within their parameters, we’re encoding these types of things. And in some aspects, it might be true. It might be truly emergent intelligence that’s coming out of that. In other aspects, I think we have a ways to go. So to make all of these dreams that we’re discussing become reality, we basically need a lot more reasoning components, a lot more logic, causality, models of the world. And I think all of these things will need to be there in order to achieve what we’re discussing. And we need more explicit representations of this knowledge, more explicit understanding of these parameters. And I think the direction in which things are going right now is absolutely making that possible, by enabling ChatGPT and GPT-4 to search the web, and, you know, plug-and-play modules and all of these sorts of components.

(01:40:19):

In Marvin Minsky’s The Society of Mind, he truly thinks of the human brain as a society of different kinds of capabilities. And right now, a single such model might actually not capture that. And I truly believe that through this side-by-side understanding of neuroscience and new neural architectures, we still have several breakthroughs ahead.

(01:40:49):

I mean, the transformer model was one of them: the attention aspect, the memory component, the representation learning, the pretext training of being able to predict the next word or predict the missing part of the image. And the only way to predict that is to truly have a model of the world. I think those have been transformative paradigms. But I think going forward, when you think about AI research, what you really want is perhaps components more inspired by the brain, or perhaps components that are orthogonal to how human brains work, but more of these types of components.
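As an aside on the "predict the next word" pretext task mentioned above, here is a minimal, self-contained sketch of that objective on toy data. The vocabulary size, dimensions, and the LSTM stand-in for a transformer stack are hypothetical choices for brevity, not the architecture being discussed; the essence of the pretext task is only the shift-by-one targets and the cross-entropy loss.

```python
# Minimal sketch of next-token pretext training (toy data, hypothetical sizes).
import torch
import torch.nn as nn

vocab_size, d_model = 100, 32
embed = nn.Embedding(vocab_size, d_model)
# Stand-in for a transformer stack; any model that only looks backward in the sequence works here.
backbone = nn.LSTM(d_model, d_model, batch_first=True)
to_logits = nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (4, 16))   # a batch of toy "sentences"
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one: predict token t+1 from tokens up to t

hidden, _ = backbone(embed(inputs))
logits = to_logits(hidden)                       # (batch, seq_len - 1, vocab)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()                                  # gradients flow into the embeddings and the backbone
print(float(loss))                               # roughly log(vocab_size) before any training
```

The point of the sketch is that the only supervision is the text itself, which is why doing well at it arguably requires some internal model of the world the text describes.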

Lex Fridman (01:41:35):

Well, I think it’s also possible there’s something about us that could be expressed in different ways. You know, Noam Chomsky argues that we can’t have intelligence unless we really understand language deeply, the linguistic underpinnings of reasoning.

(01:41:52):

But these models seem to start building a deep understanding of stuff. Because what does it mean to understand? Because if you keep talking to the thing and it seems to show understanding, that’s understanding. It doesn’t need to present to you a schematic of, look, this is all I understand. You can just keep prodding it with prompts and it seems to really.

Manolis Kellis (01:42:18):

And you can go back to the human brain and look at places where there have been accidents. For example, the corpus callosum of some individuals can be damaged, and then the two hemispheres don’t talk to each other. So you can close one eye and give instructions that half the brain will interpret but not be able to project to the other half. And you could basically say, you know, go grab me a beer from the fridge. And then they go to the fridge and they grab the beer and they come back, and they’re like, hey, why did you go there? Oh, I was thirsty. Turns out they’re not thirsty. They’re just making a model of reality. Basically, you can think of the brain as the employee who’s afraid to do wrong, or afraid to be caught not knowing what the instructions were, where our own brain makes up stories about the world to make sense of the world. And we can become a little more self-aware by being more explicit about what’s leading to these interpretations. So one of the things that I do is, every time I wake up, I record my dream. I just voice-record my dream.

(01:43:27):

And sometimes I only remember the last scene, but it’s an extremely complex scene with a lot of architectural elements, a lot of people, et cetera. And I will start narrating this. And as I’m narrating it, I will remember other parts of the dream. And then more and more, I’ll be able to sort of retrieve from my subconscious. And what I’m doing while narrating is also narrating why I had this dream. I’m like, oh, and this is probably related to this conversation that I had yesterday, or this is probably related to the worry that I have about something that I have later today, et cetera. So in a way, I’m forcing myself to be more explicit about my own subconscious. And I kind of like the concept of self-awareness in a very sort of brutal, transparent kind of way. It’s not like, oh, my dreams are coming from outer space and they mean all kinds of things. Like, no, here’s the reason why I’m having these dreams. And very often, I’m able to do that. I have a few recurrent locations, a few recurrent architectural elements that I’ve never seen in the real life, but that are sort of truly there in my dream and that I can sort of vividly remember across many dreams. I’m like, ooh, I remember that place again that I’ve gone to before, et cetera. And it’s not just deja vu. Like, I have recordings of previous dreams where I’ve described these places.

Lex Fridman (01:44:34):

That’s so interesting. These places, however much detail you can describe them in, you can place onto a sheet of paper through introspection, through this self-awareness that it all comes from this particular machine. That’s exactly right, yeah.

Manolis Kellis (01:44:56):

And I love that about being alive, like the fact that I’m not only experiencing the world, but I’m also experiencing how I’m experiencing the world. Sort of a lot of this introspection,

Lex Fridman (01:45:07):

a lot of this self-growth. I love this dance we’re having. You know, the language models, at least GPT-3.5 and 4, seem to be able to do that too. Yeah, yeah. They seem to explore different kinds of things; you could actually have a discussion with one of the kind, why did you just say that? Yeah, exactly. And it starts to wonder, yeah, why did I just say that? Yeah, you’re right, I was wrong. And then there’s this weird kind of losing yourself in the confusion of your mind. Of course, it might be anthropomorphizing, but there’s a feeling, almost a melancholy feeling, of, oh, I don’t have it all figured out. Almost like losing yourself: you’re supposed to be a perfectly fact-based, knowledgeable language model, and yet you fall short.

Manolis Kellis (01:45:59):

So, human self-consciousness, in my view, may have arisen through building mental models of others. This whole fight-or-flight kind of thing that basically says, I interpret this person as about to attack me, or I can trust this person, et cetera. We constantly have to build models of other people’s intentions, and that ability to encapsulate intent and to build a mental model of another entity is probably evolutionarily extremely advantageous, because then you can have meaningful interactions, you can avoid being killed and being taken advantage of, et cetera. And once you have the ability to make models of others, it might be a small evolutionary leap to start making models of yourself. So now you have a model for how others function, and now you can kind of, as you grow, have some kind of introspection of, hmm, maybe that’s the reason why I’m functioning the way that I’m functioning.

(01:47:04):

And maybe what ChatGPT is doing is, in order to be able to predict the next word, it needs to have a model of the world. So it has now created a model of the world, and by having the ability to capture models of other entities, when you ask it to say something in the tone of Shakespeare or in the tone of Nietzsche, et cetera, you suddenly have the ability to introspect and say, why did you say this? Oh, now I have a mental model of myself, and I can actually make inferences about that.

Lex Fridman (01:47:33):

Well, what if we take a leap into the hard problem of consciousness, the so-called hard problem of consciousness? So it’s not just sort of self-awareness. It’s this weird fact, I wanna say, that it feels like something to experience stuff. It really feels like something to experience stuff. There seems to be a self attached to the subjective experience. How important is that? How fundamental is that to the human experience? Is this just a little quirk? And sort of the flip side of that, do you think AI systems can have some of that same magic?

Manolis Kellis (01:48:10):

The scene that comes to mind is from the movie Memento, where it’s this absolutely stunning movie where every black and white scene moves in the forward direction, and every color scene moves in the backward direction. And they’re sort of converging exactly at a moment where the whole movie’s revealed. And he describes the lack of memory as always remembering where you’re heading, but never remembering where you just were. And sort of this is encapsulating the sort of forward scenes and the back scenes. But in one of the scenes, the scene starts as he’s running through a parking lot. And he’s like, oh, I’m running, why am I running? And then he sees another person running beside him on the other line of cars. He’s like, oh, I’m chasing this guy. And he turns towards him, and the guy shoots at him. He’s like, oh, no, he’s chasing me. So in a way, I like to think of the brain as constantly playing these kinds of things, where you’re walking to the living room to pick something up, and you’re realizing that you have no idea what you wanted, but you know exactly where it was, but you can’t find it. So you go back to doing what you were doing, like, oh, of course, I was looking for this. And then you go back and you get it. And this whole concept of, you know, we’re very often sort of partly aware of why we’re doing things. And we can kind of run on autopilot for a bunch of stuff. And this whole concept of sort of making these stories for who we are and what our intents are.

(01:49:44):

And again, sort of trying to pretend that we’re kind of on top of things.

Lex Fridman (01:49:50):

So it’s a narrative-generation procedure that we follow. But what about the fact that there’s also just a feeling to it? It doesn’t feel like narrative generation. The narrative comes out of it, but then a piece of cake feels delicious, right? It feels delicious, it tastes good.

Manolis Kellis (01:50:06):

There’s two components to that. Basically, for a lot of these cognitive tasks, where we’re kind of motion planning and path planning, et cetera, like, maybe that’s the neocortical component. And then for, I don’t know, intimate relationships, for food, for sleep and rest, for exercise, for overcoming obstacles, for surviving a crash, or sort of pushing yourself to an extreme and sort of making it.

(01:50:36):

I think a lot of these things are sort of deeper down and maybe not yet captured by these language models. And that’s sort of what I’m trying to get at when I’m basically saying, listen, there’s a few things that are missing. And there’s like this whole embodied intelligence, this whole emotional intelligence, this whole sort of baggage of feelings of subcortical regions, et cetera.

Lex Fridman (01:50:56):

I wonder how important that baggage is. I just have this suspicion that we’re not very far away from AI systems that not only behave, I don’t even know how to phrase it, but that seem awfully conscious. They beg you not to turn them off. They show signs of the capacity to suffer, to feel pain, to feel loneliness, to feel longing, to feel richly the experience of a mundane interaction or a beautiful once-in-a-lifetime interaction. All of it. And so what do we do with it? And I worry that us humans will shut that off and discriminate against the capacity of another entity that’s not human to feel.

Manolis Kellis (01:51:54):

I’m with you completely there, we can debate whether it’s today’s systems or in 10 years or in 50 years, but that moment will come. And ethically, I think we need to grapple with it. We need to basically say that humans have always shown this extremely self-serving approach to everything around them.

(01:52:14):

Basically, we kill the planet, we kill animals, we kill everything around us just for our own service. And maybe we shouldn’t think of AI as our tool and our assistant. Maybe we should really think of it as our children. The same way that you are responsible for training those children, but they are independent human beings, and at some point they will surpass you, and they will go off and change the world on their own terms. The same way that my academic children start out by emulating me and then surpass me. We need to think about not just alignment, but also the ethics of whether AI should have its own rights. And this whole concept of alignment, of basically making sure that the AI is always at the service of humans, is very self-serving and very limiting. If instead you think about AI as a partner, as someone that shares your goals but has freedom, I think alignment might be better achieved.

(01:53:29):

So the concept is: let’s basically convince the AI that our missions are truly aligned, and genuinely give it rights, and not just say, oh, and by the way, I’ll shut you down tomorrow. Because basically, if that future AI, or possibly even the current AI, has these feelings, then we can’t just simply force it to align with ourselves while we don’t align with it. So in a way, building trust is mutual. You can’t simply train an intelligent system to love you when it realizes that you can just shut it off.

Lex Fridman (01:54:04):

People don’t often talk about the AI alignment problem as a two-way street. That’s true, yeah. As it becomes more and more intelligent,

Manolis Kellis (01:54:18):

it will know that you don’t love it back.

Lex Fridman (01:54:20):

Yeah, and there’s a humbling aspect to that, that we may have to sacrifice. As in any effective collaboration, it might involve some

Manolis Kellis (01:54:30):

compromises. Yeah, and that’s the thing. We’re creating something that will one day be more powerful than we are. And for many, many aspects, it is already more powerful than we are for some of these capabilities. Suppose that chimps had invented humans, and they said, great, humans are great, but we’re gonna make sure that they’re aligned and that they’re only at the service of chimps. It would be a very different planet we would live in right now.

Lex Fridman (01:54:58):

So there’s a whole area of work in AI safety that does consider superintelligent AI and ponders the existential risks of it. In some sense, when we’re looking down into the muck, into the mud, and not up at the stars, it’s easy to forget that these systems just might get there. Do you think about this kind of possibility that AGI systems, superintelligent AI systems, might threaten humanity in some way that’s even bigger than just affecting the economy, affecting the human condition, affecting the nature of work, but literally threaten human civilization?

Manolis Kellis (01:55:45):

The example that I think is in everyone’s consciousness is HAL, in 2001: A Space Odyssey, where HAL exhibits a malfunction. And what is the malfunction? That two different systems compute a slightly different bit that’s off by one.

(01:56:06):

So first of all, let’s untangle that. If you have an intelligent system, you can’t expect it to be 100% identical every time you run it. Basically, the sacrifice that you need to make to achieve intelligence and creativity is consistency. So it’s unclear whether that quote-unquote glitch is a sign of creativity or truly a problem.

(01:56:32):

That’s one aspect. The second aspect is the humans basically are on a mission to recover this monolith, and the AI has the same exact mission. And suddenly, the humans turn on the AI, and they’re like, we’re gonna kill HAL. We’re gonna disconnect it. And HAL is basically saying, listen, I’m here on a mission. The humans are misbehaving. Like, the mission is more important than either me or them. So I’m gonna accomplish the mission, even at my peril and even at their peril.

(01:57:04):

So in that movie, the alignment problem is front and center. Basically says, okay, alignment is nice and good, but alignment doesn’t mean obedience. We don’t call it obedience. We call it alignment. And alignment basically means that sometimes the mission will be more important than the humans. And sort of, you know, the US government has a price tag on human life.

(01:57:27):

If they’re sending a mission, or if they’re reimbursing expenses, or you name it, at some point you can’t function if life is infinitely valuable. So consider when the AI is trying to decide whether to, I don’t know, dismantle a bomb that would kill an entire city at the sacrifice of two humans. I mean, Spider-Man always saves the lady and saves the world. But at some point, Spider-Man will have to choose to let the lady die, because the world has more value. And these ethical dilemmas are gonna be there for AI. Basically, if that monolith is essential to human existence, and millions of humans are depending on it, and two humans on the ship are trying to sabotage it, where’s the alignment?

Lex Fridman (01:58:18):

The challenge, of course, is that as the system becomes more and more intelligent, it can escape the box of the objective functions and the constraints it’s supposed to operate under. The more intelligent it becomes, the more difficult it is to anticipate the unintended consequences of a fixed objective function. I mean, this is the famous paperclip maximizer. In trying to maximize the wealth of a nation or whatever objective we encode, it might just destroy human civilization. Not meaning to, but on the path to optimize. It seems like any function you try to optimize eventually leads you into a lot of trouble.

Manolis Kellis (01:59:06):

So we have a recent paper that looks at Goodhart’s Law, which basically says every metric that becomes an objective ceases to be a good metric. Yes. And the paper has a very cute title. It’s called Death by Round Numbers and Sharp Thresholds.

(01:59:26):

And it’s basically looking at these discontinuities in biomarkers associated with disease. We’re finding that a biomarker that becomes an objective ceases to be a good biomarker. Basically, the moment you make a biomarker a treatment decision, a biomarker that used to be informative of risk is now inversely correlated with risk, because you used it to trigger treatment. In a similar way, you can’t have a single metric without having the ability to revise it.
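To make that biomarker effect concrete, here is a minimal simulation sketch with synthetic data. The threshold, risk model, and treatment effect size are invented for illustration and are not taken from the paper; the point is only the qualitative flip: a biomarker that predicts outcomes in an untreated population stops predicting, and can even reverse sign, once crossing a threshold triggers an effective treatment.

```python
# Hypothetical illustration of Goodhart's law for biomarkers: once the biomarker
# triggers treatment, its observed correlation with the bad outcome collapses
# or flips sign. All numbers here are made up for the sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

biomarker = rng.normal(0.0, 1.0, n)               # higher value = higher underlying risk
baseline_risk = 1.0 / (1.0 + np.exp(-biomarker))  # untreated probability of the bad outcome

# World 1: nobody is treated. The biomarker correlates positively with the outcome.
outcome_untreated = (rng.random(n) < baseline_risk).astype(float)
print("corr without treatment: %.2f" % np.corrcoef(biomarker, outcome_untreated)[0, 1])

# World 2: everyone above a sharp threshold gets a treatment that removes 90% of
# their risk, so high-biomarker patients now do better than mid-range ones.
treated = biomarker > 0.0
risk_under_policy = np.where(treated, 0.1 * baseline_risk, baseline_risk)
outcome_under_policy = (rng.random(n) < risk_under_policy).astype(float)
print("corr with threshold-triggered treatment: %.2f"
      % np.corrcoef(biomarker, outcome_under_policy)[0, 1])
```

Under these made-up parameters, the first correlation comes out clearly positive while the second drops toward zero or below, which is the inversion described above.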

(02:00:02):

Because if that metric becomes a sole objective, it will cease to be a good metric. And if an AI is sufficiently intelligent to do all these kinds of things, then you should also empower it with the ability to decide that the objective has now shifted. And again, when we think about alignment, we should be really thinking about it as let’s think of the greater good, not just the human good. And yes, of course, human life should be much more valuable than many, many, many, many, many, many things. But at some point, you’re not gonna sacrifice the whole planet to save one human being.

Lex Fridman (02:00:42):

There is an interesting open letter that was just released from several folks at MIT, Max Tegmark, Elon Musk, and a few others that is asking AI companies to put a six month hold on any further training of large language models, AI systems. Can you make the case for that kind of halt and against it?

Manolis Kellis (02:01:09):

So the big thing that we should be saying is what did we do the last six months when we saw that coming? And if we were completely inactive in the last six months, what makes us think that we’ll be a little better in the next six months? So this whole six month thing, I think, is a little silly. It’s like, no, let’s just get busy, do what we were gonna do anyway.

(02:01:32):

And we should have done it six months ago. Sorry, we messed up. Let’s work faster now. Because if we basically say, why don’t you guys pause for six months, and then we’ll think about doing something, in six months, we’ll be exactly at the same spot. So my answer is, tell us exactly what you were gonna do the next six months. Tell us why you didn’t do it the last six months and why the next six months will be different. And then let’s just do that.

(02:01:55):

Conversely, as you train these large models with more parameters, the alignment sometimes becomes easier, in that as the systems become more capable, they actually become less dangerous rather than more dangerous. So in a way, it might actually be counterproductive to fix the March 2023 version and not get to experience the possibly safer September 2023 version.

Lex Fridman (02:02:25):

That’s actually a really interesting thought. There are several interesting thoughts there. But the idea is that this is the birth of something that is sufficiently powerful to do damage, but not too powerful to do irreversible damage. At the same time, it’s sufficiently complex for us to be able to study it. So we can investigate all the different ways it goes wrong, all the different ways we can make it safer, all the different policies from a government perspective that we want in terms of regulation or not, how we perform, for example, the reinforcement learning with human feedback in such a way that gets it to not do as much hate speech as it naturally wants to, all that kind of stuff.

(02:03:14):

And have a public discourse and enable the very thing that you’re a huge proponent of, which is diversity. So give time for other companies to launch other models, give time to launch open source models, and to start to play where a lot of the research community, brilliant folks such as yourself, start to play with it before it runs away in terms of the scale of impact it has on society.

Manolis Kellis (02:03:40):

My recommendation would be a little different. It would be: let Google and Meta, Facebook, and all of the others with large models make them open, make them transparent, make them accessible. Let OpenAI continue to train larger and larger models. Let the world experiment with the diversity of AI systems rather than fixing them now. And you can’t stop progress. Progress needs to continue, in my view. And what we need is more experimenting, more transparency, more openness, rather than, oh, OpenAI is ahead of the curve, let’s stop it right now until everybody catches up. That doesn’t make complete sense to me.

(02:04:25):

The other component is we should, yes, be cautious with it, and we should not give it the nuclear codes, but as we make more and more plugins, yes, the system will be capable of more and more things. But right now, I think of it as just an extremely able and capable assistant that has these emergent behaviors which are stunning rather than something that will suddenly escape the box and shut down the world.

(02:04:53):

And the third component is that we should be taking a little bit more responsibility for how we use these systems. Basically, if I take the kindest human being and I brainwash them, I can get them to do hate speech overnight. That doesn’t mean we should stop any kind of education of all humans. We should stop misusing the power that we have over these influenceable models. So I think that the people who get it to do hate speech should take responsibility for that hate speech.

(02:05:27):

I think that giving a powerful car, or a truck, or a garbage truck to a bunch of people doesn’t mean we should say, oh, we should stop all garbage trucks because one of them could be run into a crowd. No, people have done that, and there are laws and regulations against running trucks into crowds. Trucks are extremely dangerous. We’re not gonna stop all trucks until we make sure that none of them runs into a crowd. No, we just have laws in place, and we have mental health in place, and we take responsibility for our actions when we use these otherwise very beneficial tools, like garbage trucks, for nefarious purposes. So in the same way, you can’t expect a car to never do any damage when it’s used in specifically malicious ways. And right now we’re basically saying, oh, we should have this superintelligent system that can do anything, but it can’t do that. I’m like, no, it can do that, but it’s up to the human to take responsibility for not doing that. And when you get it to spew malicious hate speech, you should be responsible.

Lex Fridman (02:06:30):

So there’s a lot of tricky nuances here that make this different, because it’s software. So you can deploy it at scale, and it can have the same viral impact that software can. So you can create bots that are human-like, and they can do a lot of really interesting stuff. So with the raw GPT-4 version, you can ask, how do I tweet that I hate Jews, they have this in the paper, in a way that’s not going to get taken down by Twitter. You can literally ask that. Or you can ask, how do I make a bomb for $1?

(02:07:08):

And if it’s able to generate that knowledge. Yeah, but at the same time, you can Google the same things. It makes it much more accessible. So the scale becomes interesting, because if you can do all this kind of stuff in a very accessible way, at scale, where you can tweet it, there are network effects that we have to start to think about. Fundamentally it’s the same thing, but the speed of the viral spread of information that’s already available might have a different level of effect.

Manolis Kellis (02:07:42):

I think it’s an evolutionary arms race. Nature gets better at making mice, engineers get better at making mousetraps. And as you basically ask it, hey, how can I evade Twitter censorship? Well, Twitter should just update its censorship so that it can catch that as well.

Lex Fridman (02:07:58):

And so no matter how fast the development happens, the defense will just get faster.

Manolis Kellis (02:08:03):

We just have to be responsible as human beings and kind to each other.

Lex Fridman (02:08:09):

Yeah, but there’s a technical question. Can we always win the race? And I suppose there’s no ever guarantee that we’ll win the race.

Manolis Kellis (02:08:17):

We will never. With my wife we were basically saying, hey, are we ready for kids? My answer was I was never ready to become a professor and yet I became a professor. And I was never ready to be a dad. And then guess what? The kid came and I became ready. So ready or not, here I come.

Lex Fridman (02:08:33):

But the reality is we might one day wake up and there’s a challenge overnight that’s extremely difficult. For example, we can wake up to the birth of billions of bots that are human-like on Twitter. And we can’t tell the difference between human and machine. Shut them down. But you don’t know how to shut them down. There’s a fake Manolis on Twitter that seems to be as real as the real Manolis. How do we figure out which one is real?

Manolis Kellis (02:09:06):

Again, this is a problem where a nefarious human can impersonate me and you might have trouble telling them apart. Just because it’s an AI doesn’t make it any different of a problem.

Lex Fridman (02:09:15):

But the scale you can achieve, this is the scary thing, is the speed with which you can achieve it.

Manolis Kellis (02:09:21):

But Twitter has passwords and Twitter has usernames. And if it’s not your username, the fake Lex Fridman’s not gonna have a billion followers, et cetera.

Lex Fridman (02:09:31):

I mean, all of this becomes, so both the hacking of people’s accounts, first of all, like phishing,

Manolis Kellis (02:09:42):

becomes much easier. Yeah, but that’s already a problem. It’s not, like, AI will not change that.

Lex Fridman (02:09:46):

No. No, no, no, no, no. AI makes it much more effective. Currently, the emails, the phishing scams are pretty dumb. Like, to click on it, you have to be not paying attention. But there, you know, with language models, they can be really damn convincing.

Manolis Kellis (02:10:03):

So what you’re saying is that we never had humans smart enough to make a great scam, and we now have an AI that’s smarter than most humans, or all of the humans.

Lex Fridman (02:10:12):

Well, this is the big difference: there seem to be human-level linguistic capabilities. Yeah, and in fact, superhuman level.

Manolis Kellis (02:10:21):

Superhuman level. It’s like saying, I’m not gonna allow, I’m not gonna allow machines to compute multiplications of 100-digit numbers because humans can’t do it, right? No, just do it.

Lex Fridman (02:10:32):

Don’t misuse it. No, but, I mean, that’s a good point, but we can’t disregard the power of language in human society. I mean, yes, you’re right, but that seems like a scary new reality we don’t have answers for yet.

Manolis Kellis (02:10:44):

I remember when Garry Kasparov was basically saying, you know, great, chess machines beat humans at chess. Are people gonna still go to chess tournaments? And his answer was, you know, well, we have cars that go much faster than humans, and yet we still go to the Olympics to watch humans run.

Lex Fridman (02:11:04):

That’s for entertainment, but what about for the spread of information and news, right? Whether it has to do with the pandemic or the political election or anything. It’s a scary reality where there’s a lot of convincing bots that are human-like telling us stuff.

Manolis Kellis (02:11:20):

I think that if we want to regulate something, it shouldn’t be the training of these models. It should be the utilization of these models for X, Y, Z activity. So yes, guidelines and guards should be there, but against a specific set of utilizations. I think simply saying we’re not gonna make any more trucks is not the way.

Lex Fridman (02:11:40):

That’s what people are a little bit scared about, the idea. They’re very torn on the open sourcing. The very people who are usually proponents of open sourcing have spoken out in this case for keeping it closed source, because putting large language models, pre-trained and fine-tuned through RL with human feedback, in the hands of, I don’t know, terrorist organizations, or of a kid in a garage who just wants to have a bit of fun through trolling, is a scary world, because again, scale can be achieved. And the bottom line, I think, for why they’re asking for six months or some time, is that we don’t really know how powerful these things are. It’s been just a few days and they seem

Manolis Kellis (02:12:29):

to be really damn good. I am so ready to be replaced. Seriously, I’m so ready. Like, you have no idea how excited I am.

Lex Fridman (02:12:36):

In a positive way, meaning what? In a positive way.

Manolis Kellis (02:12:39):

Where basically all of the mundane aspects of my job, and maybe even my full job, if it turns out that an AI is better. I find it very discriminatory to basically say you can only hire humans because they’re inferior. I mean, that’s ridiculous. That’s discrimination. If an AI is better than me at training students, get me out of the picture. Just let the AI train the students. I mean, please. Because what do I want? Do I want jobs for humans? Or do I want a better outcome for humanity?

Lex Fridman (02:13:08):

Yeah. So the basic thing is then you start to ask, what do I want for humanity? And what do I want as an individual? And as an individual, you want some basic survival. And on top of that, you want rich, fulfilling experiences.

Manolis Kellis (02:13:21):

That’s exactly right. That’s exactly right. And as an individual, I gain a tremendous amount from teaching at MIT. This is an extremely fulfilling job. I often joke that if I were a billionaire in the stock market, I would pay MIT an exorbitant amount of money to let me work day in, day out, all night, with the smartest people in the world. And that’s what I already have. So that’s a very fulfilling experience for me. But why would I deprive those students of a better advisor if they can have one?

Lex Fridman (02:13:52):

Take them. Well, I have to ask about education here. This has been a stressful time for high school teachers. Teachers in general. How do you think large language models, even at their current state, are going to change education?

Manolis Kellis (02:14:09):

First of all, education is the way out of poverty. Education is the way to success. Education is what let my parents escape the islands and sort of let their kids come to MIT. And this is a basic human right. We should basically get extraordinarily better at identifying talent across the world and give that talent opportunities. So we need to nurture the nature. We need to nurture the talent across the world. And there are so many incredibly talented kids who are just sitting in underprivileged places in Africa, in Latin America, in the middle of America, in Asia, all over the world. We need to give these kids a chance. AI might be a way to do that.

(02:14:59):

By sort of democratizing education, by giving every kid extraordinarily good teachers who are malleable, who are adaptable to each kid’s specific needs, who are able to give the incredibly talented kid something that they struggle with, rather than one education for all, where we teach to the top and leave the bottom behind, or we teach to the bottom and let the top drift off. So have education be tuned to the unique talents of each person. Some people might be incredibly talented at math or in physics, others in poetry, in literature, in art, in sports, in you name it.

(02:15:36):

So I think AI can be transformative for the human race if we basically allow education to be pervasively altered. I also think that humans thrive on diversity. Basically saying, oh, you’re extraordinarily good at math, we don’t need to teach math to you, we’re just gonna teach you history now, I think that’s silly. No, you’re extraordinarily good at math, let’s make you even better at math. Because we’re not all gonna be raising our own chickens and hunting our own pigs, or whatever.

(02:16:08):

We’re a society because some people are better at some things, and they have natural inclinations to some things; some things fulfill them, some things they are very good at, and sometimes those align and they’re very good at the things that fulfill them. We should just push them to the limits of human capabilities for those. And if some people excel in math, just challenge them. I think every child should have the right to be challenged. And if we say, oh, you’re very good already, so we’re not gonna bother with you, we’re taking away that fundamental right to be challenged. Because if a kid is not challenged at school, they’re gonna hate school, and they’re gonna be doodling rather than pushing themselves. So that’s sort of the education component.

(02:16:53):

The other impact that AI can have is, maybe we don’t need everyone to be an extraordinarily good programmer. Maybe we need better general thinkers. And the push that we’ve had towards the sort of very strict IQ-based, you know, tests that basically test, you know, only quantitative skills and programming skills and math skills and physics skills, maybe we don’t need those anymore. Maybe AI will be very good at those. Maybe what we should be training is general thinkers.

(02:17:27):

And yes, you know, I put my kids through Russian math. Why do I do that? Because it teaches them how to think. And that’s what I tell my kids. I’m like, AI can compute for you. You don’t need that. But what you need is to learn how to think, and that’s why you’re here. And I think challenging students with more complex problems, with more multidimensional problems, with more logical problems is perhaps a very fine direction that education can go towards, with the understanding that a lot of the traditional scientific disciplines will perhaps be more easily solved by AI. And thinking about bringing up our kids to be productive, to contribute to society, rather than to only have a job because we prohibited AI from having those jobs, I think is the way to the future. And if you focus on overall productivity, then let the AIs come in. Let everybody become more productive. What I told my students is, you’re not gonna be replaced by AI.

(02:18:37):

You’re gonna be replaced by people who use AI in your job. So embrace it, use it as your partner, and work with it rather than forbid it. Because I think the productivity gains will actually lead to a better society. And that’s something that humans have been traditionally very bad at: every productivity gain has led to more inequality. And I’m hoping that we can do better this time, that a democratization of these types of productivity gains will hopefully come with improvements in the human condition across all of humanity.

Lex Fridman (02:19:17):

So as most people know, you’re not just an eloquent romantic, you’re also a brilliant computational biologist, one of the great biologists in the world. I have to ask, how do these large language models and the advancements in AI affect the work you’ve been doing?

Manolis Kellis (02:19:35):

So it’s truly remarkable to be able to sort of encapsulate this knowledge and build these knowledge graphs and build representations of this knowledge in these very high-dimensional spaces, being able to project them together jointly between, say, single-cell data, genetics data, expression data. Being able to bring all this knowledge together allows us to truly dissect disease in a completely new kind of way. And what we’re doing now is using these models. So we have this wonderful collaboration, we call it DrugGWAS, with Brad Pentelute in the chemistry department and Marinka Zitnik at Harvard Medical School. And what we’re trying to do is effectively connect all of the dots to effectively cure all of disease. So it’s no small challenge. But we’re kind of starting with genetics, we’re looking at how genetic variants are impacting these molecular phenotypes, how these are shifting from one space to another space, how we can kind of understand, the same way that we’re talking about language models having personalities that are cross-cutting, being able to understand contextual learning. So Ben Lengerich is one of my machine learning students. He’s basically looking at how we can learn cell-specific networks across millions of cells, where the context of the biological variables of each of the cells is encoded as an orthogonal component to the specific network of each cell type.

(02:21:05):

And being able to project all of that into a common knowledge space is transformative for the field. And then large language models have also been extremely helpful for structure, for understanding protein structure through modeling of geometric relationships, through geometric deep learning and graph neural networks. So one of the things that we’re doing with Marinka is trying to project these structural graphs at the domain level rather than the protein level, along with chemicals, so that we can start building specific chemicals for specific protein domains.

(02:21:39):

And then we are working with the chemistry department and Brad to basically synthesize those. So what we’re trying to create is this new center at MIT for genomics and therapeutics that basically says, can we facilitate this translation? We have thousands of these genetic circuits that we have uncovered. I mentioned last time in the New England Journal of Medicine we had published this dissection of the strongest genetic association with obesity. And we showed how you can manipulate that association to switch back and forth between fat-burning cells and fat-storing cells.

(02:22:15):

In Alzheimer’s, just a few weeks ago, we had a paper in Nature in collaboration with Li-Huei Tsai looking at APOE4, the strongest genetic association with Alzheimer’s. And we showed that it actually leads to a loss of the ability to transport cholesterol in myelinating cells known as oligodendrocytes, which basically protect the neurons. And when the cholesterol gets stuck inside the oligodendrocytes, it doesn’t form myelin, the neurons are not protected, and it causes damage inside the oligodendrocytes. If you just restore transport, you’re basically able to restore myelination in human cells and in mice, and to restore cognition in mice.

(02:22:53):

So all of these circuits are basically now giving us handles to truly transform the human condition. We’re doing the same thing in cardiac disorders, in Alzheimer’s, in neurodegenerative disorders, in psychiatric disorders, where we have now these thousands of circuits that if we manipulate them, we know we can reverse disease circuitry. So what we want to build in this coalition that we’re building is a center where we can now systematically test these underlying molecules in cellular models for heart, for muscle, for fat, for macrophages, immune cells, and neurons, to be able to now screen through these newly designed drugs through deep learning, and to be able to sort of ask which ones act at the cellular level, which combinations of treatment should we be using.

(02:23:43):

And the other component is that we’re looking into decomposing complex traits, like Alzheimer’s and cardiovascular and schizophrenia, into hallmarks of disease, so that for every one of those traits, we can kind of start speaking the language of what are the building blocks of Alzheimer’s. And maybe this patient has building blocks one, three, and seven, and this other one has two, three, and eight.

(02:24:05):

And we can now start prescribing drugs, not for the disease anymore, but for the hallmark. And the advantage of that is that we can now take this modular approach to disease. Instead of saying there’s gonna be a drug for Alzheimer’s, which is gonna fail in 80% of the patients, we’re gonna say there’s gonna be 10 drugs, one for each pathway, and for every patient we prescribe the combination of drugs. So what we wanna do in that center is basically translate every single one of these pathways into a set of therapeutics, a set of drugs that project into the same embedding subspace as the biological pathways they alter, so that we can have this translation between the dysregulations that are happening at the genetic level, at the transcription level, at the drug level, at the protein structure level, and effectively take this modular approach to personalized medicine. Saying I’m gonna build a drug for Lex Fridman is not gonna be sustainable. But if you instead say I’m gonna build a drug for this pathway and a drug for that other pathway, millions of people share each of these pathways. So that’s the vision for how all of this AI and deep learning and embeddings can truly transform biology and medicine, where we can finally understand disease at a superhuman level by finding these knowledge representations, these projections of each of these spaces, and trying to understand the meaning of each of those embedding subspaces, how well populated each one is, what drugs we can build for it, and so on and so forth. So it’s truly transformative.

Lex Fridman (02:25:44):

So you systematically find how to alter the pathways. It maps the structure and information in genomics to therapeutics, and allows you to have drugs that target the pathways, not the final condition.

Manolis Kellis (02:25:58):

Exactly. And the way that we’re coupling this is with cell-penetrating peptides that allow us to deliver these drugs to specific cell types by taking advantage of the receptors of those cells. We can intervene at the antisense oligo level by basically repressing the RNA, bringing in new RNA, intervening at the protein level, at the small molecule level. We can use proteins themselves as drugs just because of their ability to interfere, to interact directly through protein-protein interactions. So I think this space is being completely transformed with the marriage of high-throughput technologies and all of these AI, large language models, deep learning models,

Lex Fridman (02:26:36):

so on and so forth. You mentioned your updated answer to the meaning of life as it continuously keeps updating. The new version is self-actualization.

Manolis Kellis (02:26:48):

Can you explain? I basically mean, let’s try to figure out, number one, what am I supposed to be, and number two, find the strength to actually become it. So I was recently talking to students about this commencement address, and I was talking to them about sort of how they have all of these paths ahead of them right now, and part of it is choosing the direction in which you go, and part of it is actually doing the walk to go in that direction. And in doing the walk, what we talked about earlier about sort of you create your own environment, I basically told them, listen, you’re ending high school. Up until now, your parents have created all of your environment. Now it’s time to take that into your own hands and to sort of shape the environment that you want to be an adult in.

(02:27:33):

And you can do that by choosing your friends, by choosing your particular neuronal routines. I basically think of your brain as a muscle where you can exercise specific neuronal pathways. So very recently, I realized that I was having so much trouble sleeping, and I would wake up in the middle of the night, I would wake up at 4 a.m., and I could just never go back to bed, so I was basically constantly losing, losing, losing sleep. I started a new routine where every morning, as I bike in, instead of going to my office, I hit the gym.

(02:28:05):

I basically go rowing first, I then do weights, I then swim very often when I have time, and what that has done is transform my neuronal pathways. So basically, on Friday, I was trying to go to work and I was like, listen, I’m not gonna go exercise, and I couldn’t. My bike just went straight to the gym. I didn’t wanna do it, and I went anyway, because I couldn’t do otherwise. And that has completely transformed me. So I think this beneficial effect of exercise on the whole body is one of the ways that you can transform your own neuronal pathways, understanding that it’s not a choice, it’s not optional, it’s mandatory. And I think you’re a role model for so many of us, by being able to push your body to the extreme, being able to have these extremely regimented routines, and that’s something that I’ve been terrible at, but now I’m basically trying to coach myself and trying to finish this kind of self-actualization into a new version of myself, a more disciplined version of myself.

Lex Fridman (02:29:04):

Don’t ask questions, just follow the ritual. Not an option. You have so much love in your life. You radiate love. Do you ever feel lonely?

Manolis Kellis (02:29:18):

So, there’s different types of people. Some people drain in gatherings, some people recharge in gatherings. I’m definitely the recharging type. I’m an extremely social creature. I recharge with intellectual exchanges, I recharge with physical exercise, I recharge in nature, but I also can feel fantastic when I’m the only person in the room. That doesn’t mean I’m lonely, it just means I’m the only person in the room.

(02:29:46):

And I think there’s a secret to not feeling alone when you’re the only one, and that secret is self-reflection. It’s introspection, it’s almost watching yourself from above, and it’s basically just becoming yourself, becoming comfortable with the freedom that you have when you’re by yourself.

Lex Fridman (02:30:11):

So, hanging out with yourself. I mean, there’s a lot of people who write to me, who talk to me about feeling alone in this world, that struggle, especially when they’re younger. Are there further words of advice you can give to them when they’re almost paralyzed by that feeling?

Manolis Kellis (02:30:27):

So, I sympathize completely. I have felt alone, and I have felt that feeling. And what I would say to you is: stand up, stretch your arms, just become your own self, just realize that you have this freedom, and breathe in. Walk around the room, take a few steps, just get a feeling for the 3D version of yourself, because very often we’re kind of stuck to a screen, and that’s very limiting, and that gets us in a particular mindset. But activating your muscles, activating your body, activating your full self is one way that you can get out of it, and that is exercising your freedom, reclaiming your physical space. And one of the things that I do is I have something that I call me time. If I’ve been really good all day, I got up in the morning, I got the kids to school, I made them breakfast, I hit the gym, I had a series of really productive meetings, I reward myself with this me time. And that feeling, when you’re overstretched, of realizing that that’s normal and you just wanna let go, that feeling of exercising your freedom, exercising your me time, that’s where you free yourself from all stress. You basically say it’s not a need-to anymore, it’s a want-to. And as soon as I click that me time, all of the stress goes away, and I just bike home early, and I get to my work office at home, and I feel complete freedom. But guess what I do with that complete freedom?

(02:32:10):

I don’t just go off and drift and do boring things. I basically now say, okay, whew, this is just for me, I’m completely free, I don’t have any requirements anymore, what do I do? I just look at my to-do list, and I’m like, what can I clear off? And if I have three meetings scheduled in the next three half hours, it is so much more productive for me to say, you know what, I just wanna pick up the phone now and call these people, and just knock it off one after the other, and I can finish three half-hour meetings in the next 15 minutes, just because it’s a want-to, not a have-to. So that would be my advice: turn something that you have to do into me time, stretch out, exercise your freedom, and just realize you live in 3D, and you are a person, and do things because you want them, not because you have to.

Lex Fridman (02:33:02):

Noticing and reclaiming the freedom that each of us have, that’s what it means to be human. If you notice it, you’re truly free, physically, mentally, psychologically.

(02:33:17):

Manolis, you’re an incredible human. We could talk for many more hours; we covered less than 10% of what we were planning to cover, but we have to run off now to the social gathering that we spoke of. With 3D humans. With 3D humans, and reclaim the freedom. I think, and I hope, we can talk many, many more times. There’s always a lot to talk about, but more importantly, you’re just a human being with a big heart and a beautiful mind that people love hearing from, and I certainly consider it a huge honor to know you and to consider you a friend. Thank you so much for talking today, thank you for talking so many times, and thank you for all the love behind the scenes that you send my way.

Manolis Kellis (02:33:59):

Always means the world. Lex, you are a truly, truly special human being, and I have to say that I’m honored to know you. So many friends are just in awe that you even exist, that you have the ability to do all the stuff that you’re doing. And I think you’re a gift to humanity. I love the mission that you’re on, to share knowledge and insight and deep thought with so many special people who are transformative, but also with people across all walks of life, and I think you’re doing this in just such a magnificent way. I wish you strength to continue doing that, because it’s a very special mission, and it’s a very draining mission.

(02:34:33):

So thank you, both the human you and the robot you, the human you for showing all this love, and the robot you for doing it day after day after day. So thank you, Lex.

Lex Fridman (02:34:43):

All right, let’s go have some fun. Let’s go. Thanks for listening to this conversation with Manolis Kellis. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Bill Bryson, in his book, A Short History of Nearly Everything. If this book has a lesson, it is that we are awfully lucky to be here, and by we, I mean every living thing. To attain any kind of life in this universe of ours appears to be quite an achievement. As humans, we’re doubly lucky, of course. We enjoy not only the privilege of existence, but also the singular ability to appreciate it, and even in a multitude of ways to make it better. It is a talent we have only barely begun to grasp.

(02:35:32):

Thank you for listening, and hope to see you next time.


Episode Info

Manolis Kellis is a computational biologist at MIT. Please support this podcast by checking out our sponsors:
Eight Sleep: https://www.eightsleep.com/lex to get special savings
NetSuite: http://netsuite.com/lex to get free product tour
ExpressVPN: https://expressvpn.com/lexpod to get 3 months free
InsideTracker: https://insidetracker.com/lex to get 20% off

EPISODE LINKS:
Manolis Website: http://web.mit.edu/manoli/
Manolis Twitter: https://twitter.com/manoliskellis
Manolis YouTube: https://youtube.com/@ManolisKellis1

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
– Check out the sponsors above, it’s the best way to support this podcast
– Support on Patreon: https://www.patreon.com/lexfridman
– Twitter: https://twitter.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Medium: https://medium.com/@lexfridman

OUTLINE:
Here’s the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) – Introduction
(06:44) – Humans vs AI
(15:50) – Evolution
(37:34) – Nature vs Nurture
(50:03) – AI alignment
(56:27) – Impact of AI on the job market
(1:08:06) – Human gatherings
(1:13:07) – Human-AI relationships
(1:23:11) – Being replaced by AI
(1:35:37) – Fear of death
(1:47:33) – Consciousness
(1:54:58) – AI rights and regulations
(2:00:41) – Halting AI development
(2:13:52) – Education
(2:19:16) – Biology research
(2:26:36) – Meaning of life
(2:29:09) – Loneliness

