Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present.
Welcome everybody to the first of the AI sessions I’ll be hosting. These are spaces where I will be inviting guests in the AI space and having, you know, sort of a dialogue, in this case a trialogue with them, and then we will take questions. Throughout this conversation with Const and Ala, if you have any questions you can get them to me via DM; I think for questions DM is best. I’m not sure if I’ll be able to keep up with the chats, but for any comments about this space, please use the button on the bottom right. If you so desire, we would appreciate you tweeting out this space, however you prefer; there is a little half box with the up arrow at the top. I have today had a chance to actually read a little bit about what Const has been up to, and I’m super impressed. So I just wanted to start by saying hello, how are you guys doing today, and if you can introduce yourselves and just give a little background, please.
Jacob Steeves (01:18):
Sure. I’ll go first. I think you can hear me. So, I think one thing to start off with is that there are actually a lot of people in this chat right here that are, you know, working with us. We’re all sort of plebs, we call ourselves, which means that we’re all part of this decentralized ecosystem of machine intelligence, and we try to be as non-hierarchical as possible. And, you know, Ala and I have been at this for a very long time. I’ve been working on it for years of my life. Before I was doing BitTensor, I was building neuromorphic chips as a subcontractor for DARPA, and then I also worked at Google. I left both companies to pursue this passion of mine, and I think it’s kind of a similar story with a lot of people here, but I’ll pass it over to Ala.
Very impressive. Thanks, Const. Just making sure, does everybody hear me? Yeah, yeah. Okay, awesome, awesome.
Yeah. Yes, loud and clear.
Const put it very, very well. Kind of a somewhat similar background. I’ve been kind of working on this problem since I joined Const back in 2019. Before that, I was doing distributed artificial intelligence at VMware, and before that, I was also doing applied artificial intelligence in human-centric sensing in academia at the time as well. So, yeah, we’ve both been kind of in AI for a long time, kind of been at the BitTensor problem for, yeah, pre-COVID pretty much, and before that as well. And yeah, it’s nice to be here. Thanks for having us.
Awesome, awesome. It sounds like you both are very intelligent people and have worked on very cool things before, at least have held positions, I suppose, that people might say, you know, DARPA and Google, I mean, those are highly advanced positions to be able to get as a job. So, that’s amazing. You know, I was thinking about how to get into this.
Actually, Const, I’m impressed by how, in that first space that you joined, someone reached out to me and made a plea, you know, will you allow Const to get a word in? And I’m talking about one of those spaces a few weeks back that get very hectic and kind of heated, with everyone trying to contribute to the discussion. So I was taken aback by how many fans you have, and how a couple of other people were reaching out to me saying, hey, you know, you need to hear Const out. But in that particular space, I feel like there wasn’t a chance for you to really express yourself. And so that’s what these AI sessions are for. I was thinking about how to get into this. As you might be aware, these conversations should not involve anything highly technical that the common person cannot understand. But I was thinking, you know, I find myself having to explain to everyday folks or people in my life what a neural network is. So I’m wondering if we can start there, and from there go on to, you know, an open source, scalable, decentralized neural network, and maybe that can lead on to BitTensor and then SubTensor. Does that sound like a good plan?
Jacob Steeves (04:44):
Yeah, that sounds great. So I think maybe I can do, like, explaining to a five-year-old, and then go up the stack with neural networks, because they start from a really simple idea and then go to much more complex ones. And obviously, you know, there are 4,000 research papers published every single day on neural networks, with jargon far beyond even myself and Ala, with PhDs and degrees in artificial intelligence. So if I was to explain a neural network to a five-year-old, I would describe it as, like, an insect, like an ant, capable of using sort of hive-mind flow to learn a pathway from A to B. That’s basically a neural network. It’s a set of components adapting to the flow of energy, kind of like an ant following the pheromones sent by other ants to build an energetic flow from energy source to sink. I think that’s a pretty abstract way to talk about it, maybe even for a five-year-old. But moving up the stack: they’re functions. They’re over-parameterized statistical function approximators, which means that they are mathematical equations that you can adapt to a loss function, which is an equation that tells you how close the neural network is to an answer. And then, using that approximation to the answer, the distance from the answer, it can slowly adapt the function until it gets closer to the result. And if you do that millions of times per second and billions of times over the training process, you get to something that can approximate very high-level functions, things like projecting an image onto the description of the image or extracting the meaning from a sentence. All of those things fall underneath the not-so-sexy mathematics of statistical function approximation, which is just trying to map a function onto a data distribution. So hopefully that was helpful.
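Const's description of a loss function guiding adaptation can be sketched in a few lines of Python. This is a purely illustrative toy, not anything from BitTensor: a two-parameter model fit to samples of y = 2x + 1 by gradient descent on a mean-squared-error loss.

```python
# A minimal sketch of statistical function approximation: a model y = w*x + b
# is nudged toward the data by a loss function that measures its distance
# from the answers, exactly the adapt-toward-the-loss idea described above.

def loss(w, b, data):
    # Mean squared error: how far the model's guesses are from the answers.
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def train(data, steps=2000, lr=0.05):
    w, b = 0.0, 0.0
    for _ in range(steps):
        # Gradient of the loss with respect to each parameter.
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        # Adapt the function a little so it gets closer to the result.
        w -= lr * gw
        b -= lr * gb
    return w, b

data = [(0, 1), (1, 3), (2, 5), (3, 7)]   # samples of y = 2x + 1
w, b = train(data)
print(round(w, 2), round(b, 2))            # → 2.0 1.0
```

Real networks do exactly this, just with billions of parameters and automatic differentiation instead of two hand-derived gradients.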
Yes, definitely. I’m wondering, what is, what do they call a neural network? Like, can you draw that analogy to the neuron?
Jacob Steeves (07:21):
Sure. Like, the idea behind the neural network, you know, was biologically inspired by people that wanted to understand how the brain was able to learn things. And the history of neural networks is partially in line with the history of neuroscience: understanding how these individual cells can behave against really simple rules of behavior, and then over time learn how to do very complex things. So the first neural networks were very simple, 1950s and 60s era. Then they fell out of popularity for about 30 years, until what we call the modern-day gods of AI revived them: Yann LeCun, Yoshua Bengio, Geoffrey Hinton, who understood that the best way for us to reach artificial intelligence, strong artificial intelligence, was to build these extremely complex mathematical functions defined by a set of units and the composition of those units. And then, by building the system to sort of emulate or, you know, have strong neurological parallels, they could build things that learned very quickly and very well. A neural network is actually a composition of small individual components which do not require global information to train themselves asynchronously or individually, and this is, like, the fundamental zero-to-one innovation of neural networks. We can divide and conquer the problem; each of the neurons can adapt itself independently and still learn as a collective. And that kind of divide and conquer of a problem, you know, it’s a very common technique in computer science. It’s also something that we will probably get into when we talk about BitTensor and what we’re doing with this distribution across the internet. But I’ll hand the mic over to Ala, because, you know, he has a PhD and actually has more knowledge about this than me in many ways. Thanks, Const.
I think everyone probably knows, well, most folks probably know, but I can maybe try to approach this from an educator’s point of view. If you’ve done high school math, I’m sure you’re aware of the line of best fit; that’s just y equals mx plus b. That’s across the x and y axes, so you’ve really got two dimensions you’re doing this on. In a neural network, you’re doing this over millions and billions of dimensions, so just picture y equals mx plus b, but really on steroids. So one of the reasons it’s called a neural network is that it kind of functions like a brain, in the sense that each y equals mx plus b is sort of analogous to a brain neuron, and they’re all interconnected together, and that’s how we introduce some non-linearity to the system. So really, it’s kind of like what you learn in high school math, these lines of best fit, just across millions and billions of dimensions, trying to figure out what the best solution is. That’s the best way I can describe a neural network, and the individual neurons of each neural network. I hope that was kind of succinct there. Yes, yes, yes, awesome. So these neural networks,
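Ala's "y = mx + b on steroids" picture might be sketched like this. The weights, biases, and layer shape below are made up for the example; the point is just that each neuron is a weighted sum plus a bias, passed through a non-linearity, and wired to other neurons.

```python
# Hedged illustration of the "y = mx + b per neuron" analogy: a weighted sum
# plus bias, squashed by a non-linear function, with neurons wired in layers.
import math

def neuron(inputs, weights, bias):
    # y = m1*x1 + m2*x2 + ... + b, then a non-linearity (tanh here).
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(total)

# Two neurons in a hidden layer, feeding one output neuron.
x = [0.5, -1.0]                            # example input
h1 = neuron(x, [1.0, 0.5], 0.1)            # hidden neuron 1
h2 = neuron(x, [-0.3, 0.8], 0.0)           # hidden neuron 2
out = neuron([h1, h2], [0.7, -0.2], 0.05)  # output neuron
print(-1.0 < out < 1.0)                    # tanh keeps activations in (-1, 1)
```

Without the tanh, stacking these layers would collapse back into one big y = mx + b; the non-linearity between layers is what lets depth buy expressive power.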
I mean, the same way that in biology, organisms that might be considered less intelligent have fewer neurons and fewer connections between them, is it fair to say that, in the same way, in this sort of computer system, the system gets more intelligent by having more neurons? Is that a fair parallel to draw? It’s not a direct one-to-one correlation.
You can’t just add neurons and expect to be more intelligent, but you can say that the more neurons there are, the more depth you have, which really translates to perhaps the more insights you can gain from the data set. Does that sound about right to you, Const?
Jacob Steeves (11:31):
Yeah, exactly. More degrees of freedom in learning these high-level function approximations. You can learn much more complex functions if you have more degrees of freedom, so the patterns that you can pick up can be much more high-level. That’s what adding neurons to a neural network often achieves, and it can lead to problems like overfitting, for instance, which is a more specific neural network problem that arises. But yes, the bigger the brain, the better. And if you look at the last 10, 15 years of artificial intelligence research, the joke, or the bitter lesson, as they call it, has been that it tends to be the case that just adding more data, compute, and scale to the neural network is enough to push beyond the last, maybe more bespoke, solutions to a problem. You just want more neurons, more data, more compute in the same place. And I think that’s one of the things we’re taking on with going towards internet-scale machine intelligence: trying to reach into a completely new domain, we could say a larger domain, of machine intelligence. Let’s go internet-scale, and that’s where we can get the largest neural networks. That’s where we can get the largest amount of compute collated into one cohesive system. So yeah, it is the case, heuristically, that making neural networks larger tends to be the solution.
Yes. Okay. All right. I’m just referring back to the thought that just extending today’s neural networks without the invention of any new paradigm in technology would make those more intelligent, which is cool to me. And people likely know that if you compare a worm to an organism with a bigger brain, you can see how if you have a bigger brain, you have more processing power. So just quickly, you guys did a good job of this. I’m wondering if you can touch on deep learning a little bit, and maybe speak about what is the significance of deep learning as compared with older or other paradigms of machine learning in AI?
Jacob Steeves (14:08):
Well, this falls directly into what we were just talking about. Deep learning is effectively just deep neural networks, which means larger ones with more composed layers of neurons sitting on top of each other. And in the early days of neural network research, the researchers just didn’t have the compute, they didn’t have the GPUs, they didn’t have the CPUs and the budgets to work with incredibly large networks. And the reason for that was that the number of parameters was, you know, hitting the billions. If there are that many parameters, that many neurons, there are many degrees of freedom, so you need to try out way more permutations of your neural network, effectively, and the number of operations required to train so many parameters gets huge. So that required a lot of compute. Deep learning was enabled by working with huge datasets and huge clusters of computers, so that we could train even deeper neural networks with much, much higher parameter counts.
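The parameter-count point can be made concrete with a rough sketch of a fully connected network's degrees of freedom. The layer sizes below are arbitrary examples, not any real model:

```python
# Rough sketch of "degrees of freedom": counting the parameters of a fully
# connected network. Each layer contributes (inputs x outputs) weights plus
# one bias per output neuron.

def param_count(layer_sizes):
    return sum(i * o + o for i, o in zip(layer_sizes, layer_sizes[1:]))

small = param_count([784, 128, 10])              # a modest classifier
bigger = param_count([784, 4096, 4096, 10])      # one extra, wider layer
print(small, bigger)                             # → 101770 20037642
```

Widening and deepening the stack by one layer here multiplies the parameter count by nearly 200x, which is why deep learning had to wait for GPUs and big budgets.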
And because of this, we could work on problems which neural networks couldn’t previously work on. They weren’t very good at doing, say, image recognition, compared to the more bespoke solutions that people got out of support vector machines in the late 1990s and early 2000s, where you come at the problem with a very specific solution: you go, hey, cool, we want to recognize these edges, let’s actually define edges. In order to pick up the subtleties of a problem domain like images, and represent all of those images and all of the parts of the image that you want to understand, you need these deep neural networks. And in order to get those deep neural networks, you need a lot of compute. So you could say it was just a continuation of the same thing; shallow neural networks are just a deep neural network with fewer layers. But this enabling came from pushing compute many magnitudes in the last, what is it now, Ala, since 2008? Okay, great. So it’s been 14 years.
About 2010-ish, give or take. Yeah, yeah, that sounds right. The other thing I wanted to add here is that traditional machine learning, right, which is support vector machines all the way to your random forests, is effectively heuristics and algorithms. They are really good if you know what you’re looking for in your data sets, right? So for example, if I have a data set of flowers, I have to have a domain expert tell me what differentiates each flower from another for me to be able to create a traditional, basic machine learning algorithm that’s able to understand the differences and give me the output that I’m looking for. The advantage of deep learning is that because it eats so much data, like massive amounts of data, it’s actually able to pull out and make its own insightful decisions on the features that are coming in from the data set. So for example, with a deep learning model, applying the same example to flowers, I don’t need to know what petals and sepals are, or that some flowers are blue, others are red, and so on. It’s going to be able to figure that out on its own, because that’s just how it learns. It’s able to extract insight from the data set without me having the domain expertise for it.
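Ala's flower example might be sketched as the following toy contrast. Every feature, weight, and threshold here is invented for illustration; the point is only where the feature engineering lives.

```python
# Toy contrast between the two approaches described above: expert-crafted
# features versus learned ones. All names and numbers are made up.

# Traditional ML: a domain expert hand-crafts the features first.
def expert_features(flower):
    return [flower["petal_len"] / flower["sepal_len"],   # expert-chosen ratio
            1.0 if flower["color"] == "blue" else 0.0]   # expert-chosen cue

def classic_classifier(flower):
    ratio, is_blue = expert_features(flower)
    return "iris" if ratio > 0.4 and is_blue else "rose"

# Deep learning: the model sees raw measurements and learns its own features.
# Here the "learned" weights are hard-coded so the sketch stays runnable.
def deep_classifier(raw_measurements, learned_weights):
    score = sum(w * x for w, x in zip(learned_weights, raw_measurements))
    return "iris" if score > 0 else "rose"

flower = {"petal_len": 2.0, "sepal_len": 4.0, "color": "blue"}
raw = [flower["petal_len"], flower["sepal_len"], 1.0]    # raw inputs, no expert
print(classic_classifier(flower), deep_classifier(raw, [1.0, -0.4, 0.2]))
# → iris iris
```

Both reach the same answer, but in the second case no one had to tell the model what a petal-to-sepal ratio is; in a real deep network the weights would be learned from data rather than hard-coded.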
Right. Yeah. Yeah. And that’s, to me, it’s kind of like the magic of deep learning because that system is going to, let’s say if it’s about identifying something like a dog, whether it’s image or voice of a dog or a dog in videos or anything having to do with a dog, that it ends up really knowing what a dog is, maybe even beyond the capacity of a person to ever have instructed to know, is that fair to say?
Jacob Steeves (17:51):
Yeah, absolutely. And what’s cool about what’s happening today, you know, with the Stability AI launches, Stable Diffusion, DALL·E, DALL·E 2, is that these models are starting to pick up patterns which we couldn’t even imagine.
We didn’t know that existed in what’s, like, the latent space of humanity’s collective subconscious. You know, “an angel in front of a gate”: that idea is actually a space inside this neural network. It’s found that dimension. And I think that’s the trajectory for artificial intelligence as we know it today. We’re still pushing this edge of, okay, great, what can we do with refining information, refining down all of the data on the internet and extracting these high-level ideas, semantics, you might say, and using that to understand ourselves and be creative.
Artificial intelligence in the 1950s, you know, it was about, you know, solving chess moves, making the decision boundary between two choices or determining whether or not it was a hot dog or a cat, right, in an image. But today, it’s like the artificial intelligence has really almost extended beyond our imaginations so much so that we can use it for creative work. And that’s what we’re seeing, you know, today.
Right, right. God, I’m so impressed with Midjourney and DALL·E. I haven’t tried them. Is it Stable Diffusion or Unstable Diffusion? Because I’ve heard both here on Twitter.
Jacob Steeves (19:34):
It’s Stable Diffusion. Stable Diffusion, right, right.
So, you know, I’m so impressed that when you give them a prompt where there’s no conceivable image basis for it, like, you know, like a dolphin wearing a spacesuit, blah, blah, and further, that in the process of making that image that looks pleasant to me, it made, you know, thousands of artistic decisions that if, like, many of them were not made correctly, that the spacesuit wouldn’t look right on the dolphin.
And that’s just so impressive, you know, that it really is imagining things based on the prompt. At the same time, I’m finding, now that I’m playing with them, and I’m at the very beginning of playing with the generative art AIs, like DALL·E and Midjourney, and DALL·E I don’t have access to yet. But I’m realizing that even though you get things that might look cool or whatnot, it’s still a bit difficult to get exactly the thing you imagine in your mind. I’m sure it gets a little better as you get better at prompting, but still, it’s not the case that I can imagine something in my mind, put in a prompt, and it gives me that; it’s going to give me something that makes sense based on what I put in, but it’s still not what I imagined. But then, at the same time, how would it do that? Because I think it would be super cool the moment that you can, you know, imagine something, give an AI system a prompt, it gives you something, and then you basically tell it to modify that to get it closer and closer to what you actually imagined. I think that would be super useful. But I’m so, so impressed with this. I’m wondering about what DALL·E 2 does today, I mean, the photorealistic things that it does. Yourselves, like five years ago, would you have expected something like DALL·E to be where it is today?
Jacob Steeves (21:41):
I’ll, I’ll let you answer that first.
Frankly, it has always confused and surprised me, every time, just how quickly AI tends to advance. The interesting thing about it is that it’s always been the kind of field where you get a bit of an explosive advance, and then everything kind of falls into place, and then it pauses for a little while. And then once again, another explosive advance, and it falls into place, and so on. The very first one happened back with AlexNet, which was the very first properly created neural network that tackled images. This was back in 2012. It was a huge deal in AI, because up until that point, people were laughing at the idea of a neural network. Like, what? Are you high? What the hell is a neural network? What happened with AlexNet is it kind of revolutionized machine learning for a little bit, and it introduced this slew of research into neural networks. And then came the GPTs, right? These are the language models. They kind of did the same thing. Natural language processing was done by a very specific kind of neural net, and people were kind of happy with that performance. Then the GPTs came and just changed the game; actually, the transformer came first, and then GPT came after that. And they just changed the game completely with how we process natural language. And the same thing with Stable Diffusion is happening now. It’s one of those things that is changing the game again, in how we do image and text models that work with the two modalities at the same time. And to be frank with you, I did not expect it. But in hindsight, it makes perfect sense that we have eventually merged image and text in this way. Because we became really good at images, we became really good at text; now it’s time to do these multi-modal models that do both image and text. And eventually, very soon, I’m sure we’ll see video and text, and I’m sure we’ll see audio and text.
And we’ll see audio and image, and then audio and video, and so on. It’s going to get closer and closer to these models that can do everything, as opposed to one very good thing.
Jacob Steeves (23:33):
I think that if you look at the trajectory, looking backwards, hindsight is always 20/20, right? In a way, it’s obvious that this would happen. And I think that where artificial intelligence is going forward is in two directions, really. Unsupervised representational learning is currently very interesting to people, and is interesting to us, for building these high-level representations of language and text and audio. And then from that, you’re able to explore the space of ideas and navigate language and generate things. There’s also the left-brain problem that needs to be solved in AI.
And what do you mean by that?
Jacob Steeves (24:31):
Yeah. So the right brain is the creative one, they say. It’s the one that has the intuitive understanding of things. And then there’s the left brain, which is more… And this is a generalization. But generally, it’s the left brain that is more responsible for the detailed solutions to things. And I think if you look at the people that are criticizing artificial intelligence today, that’s what they’re saying is missing, which is the, okay, what I mean by detailed is, draw me a picture of a bike, and then it does a pretty good job with the bike, but it isn’t able to exactly, perfectly get the bike right. Because it gets the amorphous shape of the bike. It doesn’t need to be so hyper-detailed that it perfectly draws all the spokes and the relationships between the spokes.
And that’s something that humans have: this ability to kind of zoom in and out of attentive detail, from the more amorphous to the very specific. And some people are better at both domains, and some at either domain. I think that AI has really benefited from expanding on the right side. Let’s get a general idea, like 30,000 feet in the air, of what a bicycle is, and then I can draw one for you. I know the latent space of bicycles, but can I move forward to actually invent a new one with strong precision? And I think that’s where artificial intelligence will have another zero-to-one. But this trajectory of getting the amorphous understanding, taking all the data in the world and compressing it into understanding that isn’t hyper-specific, but is the latent space, the representational space, that you can use to solve the more specific problems, that’s where AI is going in the next five to ten years. And we’re going to see even more amazing results from things like Stable Diffusion; audio will be solved, language is going to get incredibly advanced. I think that we’re going to be able to write books, narrative styles, all these things. But then we’re going to have to come back and use the fine tuning of our own brains to kind of select the better images that have been created and touch them up. Right, right. Quite incredible. Yeah, I think when you look at
just these generative platforms, your mind goes to, and Ala mentioned this, that this is soon going to be animation from prompt, and a revision or two later the software is going to do photorealistic animation from prompts. And before you know it, someone can sit at their computer for a weekend, stitch together scenes that they prompted, and produce something that could eventually look as good as Avatar. Just one person and a few days of work, let’s say, eventually.
So would you agree that the reason we have something like DALL·E before we have this for AI-generated CAD or AI-generated PCBs for electrical engineering is kind of because of the training data: to train these models you need images, there are lots of them, and they’re freely available on the internet to crawl? And would you say that this same kind of generation of graphics, which might in some cases, let’s say, begin to replace a concept artist, would in time extend to other fields of design, like architecture or CAD?
Jacob Steeves (28:29):
Well, I think it will augment. I think it will augment.
Augment. Augment is a better word, right?
Jacob Steeves (28:35):
I think the people that are being replaced are the ones that won’t adapt to new artistic styles or new work styles. That’s kind of my opinion. But yes, there’s definitely going to be creep into all these domains. And I think that the AI companies that capture that role, that creep into all of these different domains, will be trillion-dollar companies, because they have this ability to take off sections of people’s workload and solve it for them. That’s why people are so interested in this domain: the applications are seemingly boundless, applying artificial intelligence to take on people’s hard work, to make work less difficult for people, and also to build AI companies along that edge.
Right. I think just to add to what Const was saying, for the longest time, AI has been really bad at improvising. It’s very good at maybe imitating something that a human would do, right? For example, most text models nowadays are writing in a style, but it’s a style that’s done by a human at one point. The interesting thing about stable diffusion is it’s starting to come up with its own decisions for how an image should look, right? For example, the example image that Const mentioned, which was a person in front of gates of heaven with angels and everything.
The color palettes, the shape of the gates, the shape of the angel, it’s starting to really just be the AI coming up with it. It’s starting to kind of improvise a little bit on its own in that way. But I think being powerful enough to beat, or to kind of replace, a human at something as complicated as CAD design or even PCB design requires a lot more improvisation. And as Const said, I think we’re incrementally getting there, but for now, we’re not quite at that point yet.
Right, right. Right. All right, so we’ve spoken a half hour, and I love these conversations and everything that you guys share, because, I mean, in this space there is not really an expert at the level that you are. So let’s find a way to get into BitTensor. I was just going to say that one of the problems I noticed is with having these large neural networks that are capable of doing so much, as you guys have touched on. One issue that I see generally, worldwide, is that the layperson is generally not going to have access to the kind of hardware resources that the large tech companies have with regard to these things. So a problem is basically that there are such powerful, powerful tools, but they’re only accessible to, basically, corporate giants. And that gets into the open source nature of what you’re doing, and the thing that I read in the intro on BitTensor’s website: that it is an open source, scalable, decentralized neural network.
Is that accurate? I’m wondering if you guys can describe it to us in a way that we might understand and to tell us what is significant about this. And then from there, we can go into what the role of the blockchain is in this endeavor. Sure. Like, I think, you know, you
Jacob Steeves (32:15):
touched on one of the interesting points here about access. Another point is ownership, the ability to actually train these models. All of these things are, you know, quickly becoming out of reach of individuals. And already, individuals have no chance of really creating the amount of compute that is required to train, you know, a novel machine learning model and then own it themselves. And I think that what we can do instead is break up that ownership, and we can collaborate and share the value that’s being created, so both the profit and the workload are distributed across multiple people. That’s one of the angles to understand what BitTensor is doing, right? We’re building a way of aligning individuals and compute so that we can work on these incredibly large problems without working underneath a particular organization which holds the keys, without requiring donations from a particular organization, and actually still create something that is the largest and most powerful in the world. We don’t just want to create an open source replica of something. We want to be the best in the world and have that ownership across many people. And I think that’s the thing that interested me the most when I got interested in BitTensor, because I was looking at technologies like Bitcoin, which are really run by a collective of individuals. Bitcoin is aligning individuals through this incentive mechanism. It’s an alignment technology, which then allows us to scale to this super internet scale: peer-to-peer, open source, decentralized. Bitcoin is larger than any other software company in the world in terms of compute. And so that’s what BitTensor is doing, but for the field of artificial intelligence.
Let’s build the mechanism for alignment between all of us to, number one, allow us all to have a chance to own this technology, both from a profit perspective, both from a use perspective, and let’s be bigger and better while we do that. And so that’s, I think, BitTensor from my perspective, but I’ll pass
the mic over to Ala. I mean, you kind of hit the nail on the head on that one. Pretty well said. Yeah. Really the only thing I have to add is that you can think of it almost as a collaboration, right? Let’s say back in your university days or high school days, or anything really, you’re all working on something similar, basically a similar-ish problem, but everyone’s kind of working on their own problem at the same time; you can always work together to kind of solve that problem together, right? So one of the things that really did not make sense to both of us during our careers in AI is that knowledge is not compounding. Like, if Const creates a state-of-the-art model that’s amazing on text, for example, or on images, and I’m working on something similar and I want to beat Const or match his performance, I still have to train from zero. I can’t use what his model has learned.
Why is that? It doesn’t really make any sense in that way. So in a lot of ways, because knowledge is not compounding, AI is actually very inefficient in this sense, and it’s actually a real offender if you want to get down to brass tacks on GPU usage and stuff like that, because we’re always relearning things that we already know. So really, what is the point here? That’s partly what we’re trying to solve with BitTensor as well: being able to compound knowledge in the network at the same time. Yeah, we’re building the incentives to align
Jacob Steeves (36:22):
us together. We’re also building the protocol for that negotiation so that we can build machine learning models which are talking to other machine learning models and sharing that value. So we like to say that we’re building the neural internet, because for the first time ever, we’re successfully doing machine learning across thousands of individuals, or at least individual computers, across the internet. We’re like the Marshall McLuhan global brain, but for AIs. And that means that we can share knowledge that’s been pre-trained and start building this kind of collective consciousness of machine intelligence. We’re getting a little bit far out here, because it’s good to root ourselves in legitimate artificial intelligence research; we haven’t proved this yet, but this is the dimension that is untouched in the field, and we’re the first to do it. And this touches on the blockchain side now, because this is such an expensive, but also such a valuable commodity, that we require that commodification layer to allow people to really share this thing, you know. In the early days of the internet, you didn’t need a payment layer, because it’s just email. It’s basically free. But machine intelligence, this stuff is, you know, billion-dollar projects.
So if we want to connect people together, we want to connect this compute together, this knowledge, we have to bake in the incentive first, the value first.
Okay, so you mean the incentive for someone to contribute computing power to the network, is that right? Okay, so is it in some way similar to, correct me if I’m wrong, like there was this project, the SETI project that gathered resources from different, you know, many individual computers to accomplish a specific task. How is BitTensor different?
It’s funny you mention that, yeah, it’s a good example. SETI at home is one example of really a bunch of, what’s the word I’m looking for, trying to distribute the compute and being able to solve it. The problem is that with SETI at home, and there’s another project just like it called Folding at Home, is that everyone’s solving the same problem. And really the one machine that solves the problem first is the one that ends up winning, I guess, quote-unquote, and submits their solution. The problem with that is A, there’s no incentive, and B, if there is incentive, it’s very, very small for the user, right? I don’t really have a stake in this project. I’m just running it on my PlayStation or my computer, but what am I really getting out of this? I’m just being altruistic about it, but that’s it. BitTensor, on the other hand, is you’re solving your own problem. Someone else is solving their own problem. And everyone is kind of collectively working on their own problem, but at the same time, they’re also working on the same modality. So the modality here being text at the current, and currently in BitTensor, but eventually we’re looking to expand it to image, video, audio, and so on. But as long as everyone’s working on text, we can work together to help each other solve our own problems, right? It’s not just one problem being solved, it’s many with everyone working on them, and the incentive is that A, I have a stake because my own problem is being solved, and B, I’m earning something back because I am giving some of my
knowledge to someone else. Okay. So in BitTensor, this network is parsing text. So what is the goal that it’s accomplishing? Is it deciphering the meaning of the text? Or what is
the operation? Almost any problem on text. So everything from math to languages to text generation to summarization and so on, any problem involving text is a problem that we are looking to solve in BitTensor. So it doesn’t really need to be something very specific. You could be coming in with, let’s say, a language generation problem. You want to generate books or you want to generate subtitles or something. And someone else is coming in with a language generation problem, but they’re trying to generate something else. Let’s say they’re trying to generate an article because they’re a reporter or something.
By working together, you kind of learn from each other because you’re both working on a similar generation problem. And because you’re both working on a similar problem, even though you have different goals for this problem and probably an entirely different neural network, you end up still being able to collaborate and kind of help each other out to achieve this goal. Okay. Okay. Go ahead. Sorry. Just to add a little caveat. The one thing is that the BitTensor system is really an API, right? It’s a protocol. So what that means is that you as a user, you as the person who brought the neural network, you don’t have to choose who to talk to. You don’t know who’s out there. You don’t know who’s running what model. We enabled your model to learn for itself who’s useful and who isn’t, who to talk to and who not to talk to, who is worth your while and who’s not. Sorry. Just one last thing I wanted to add. Oh, no. Of course. Of course. You know,
you mentioned that this system enables prior learning to be preserved. I mean, I think that’s important because I can see how all over the world right now, all sorts of people are retraining similar systems with training that has already probably been done thousands of times, but they just need it for their application. So they’re redoing it. So that is really significant. And of course, decentralization, right? I mean, I’m a big fan of Web3 really because of the ownership aspect, the distribution of that value across the users and stakeholders. I still feel like the world is sleeping on how significant that is. So it would mean that if Twitter were decentralized, it would mean that, you know, I don’t know, my contributions to this platform at some level would reward me with something that eventually would basically mean like equity in the platform or some kind of ownership. And I just can’t overstate how important that is in a world where, you know, we have centralization of value and we’ve had that going on for decades. And I would actually state one of humanity’s overarching problems as the unequal distribution of wealth slash value. That being said, this is significant. How is it different from something like the GPT series or GPT-3? Maybe before we move on, because I’m really
Jacob Steeves (43:10):
curious to talk about that, that we were just speaking about with decentralization and how it plays into BitTensor. You know, we were also really fascinated by decentralization of power, which is multiple stakeholders, you know, not absolute power in the hands of one centralized authority. And I think that really is an important and systemic design when we’re building incredibly powerful technologies. And the reason I say that is that we are beginning as a civilization to touch into technologies which are, you know, existentially threatening to us. In the early days, because of very obvious abuse possibilities, and, I would say, later down the road, because of the concentration of power and the abuse of the many by the few. And, you know, we can get very sci-fi with the idea that, you know, artificial intelligence can truly lift off and, you know, wipe out the human race, or however you look at the likelihood of any of those things. If we want something to be aligned with us, we need to make sure that we have control over it and that that control is diffused amongst humanity. So decentralization is the core technology for that proposal. Web 3.0 is humanity’s best language for describing and creating systems which come into alignment with communities where many people, you know, read, write, and own the technology. So that’s another aspect of what we’re doing here at BitTensor. And, I mean, obviously it’s integrated because, you know, reading, writing, owning, that’s collaboration amongst people, it’s sharing ownership, and that’s what really drives the incentive.
When we look at this technology, I think that one of the more interesting things is its ability to solve these sort of systemic problems that are, you know, the talk of the town in the AI community. And on our left, we have the ivory tower solutions of cloud computing. So, you know, solutions of corporations that are saying, hey, we’ll take this technology and guide it for humanity. And then maybe on the right, we have the people that just want to say, let’s chaotically open source it. And I think that in the middle is the stewardship that we can, you know, create by using Web3 technologies so that we all have a say aligned through the technology of consensus and ownership. So, yeah, that’s another point I’d like to bring up.
And so just for the audience here, so, you know, open source, that means that the code is transparent and it can be contributed to by any individual, I suppose. Are there disadvantages
Jacob Steeves (46:35):
to being open source? Well, I think open source doesn’t mean that anybody can contribute. Anybody can fork the code and anybody can see it. I think that there are advantages and disadvantages, right? You know, we don’t want to open source the nuclear codes, right? We probably don’t want to show just anybody how to make, you know, nuclear reactors, especially if it was very easy. I think that there’s this kind of golden mean when you have protocols which are open, but also the ability for people to create ecosystems where there is maybe some censorship and control. I think the beautiful example would be Twitter itself: Twitter should be just an organization which interfaces with the open source, we could say, or the non-censoring protocol that is Twitter, and solves the more specific problems of hate speech and fake news for its users. But those users have the ability to interface directly with the raw protocol. I think that kind of golden mean between open protocols and, you could say, closed interfaces is where, you know, technology should go. And that’s what we’re building at BitTensor, where we have an open protocol, anybody can use it, but people can also build these technologies on top of it where they can solve censorship problems and hate speech problems.
Okay. Wow. Yeah. Incredible. Incredible. Getting a couple of questions, some of them I won’t address, but they are having to do with specifics of like, you know, the cryptocurrency. The one question maybe I will address is, so if you can maybe explain what the memory problem is for neural networks, and they are wondering how are you guys solving that or tackling that?
Jacob Steeves (48:43):
Sorry, did you say the memory problem? Ala, are you familiar with that?
Does that mean anything? You mean the catastrophic forgetting problem?
How are you guys solving the memory problem for neural networks? So maybe that is not a good question. I can maybe speak a little bit to the catastrophic forgetting problem.
Maybe that’s what’s meant here, and if it’s not, then I guess… Do you have any more context for everybody?
I don’t, it’s a DM from Al Zorro here in the audience.
Jacob Steeves (49:17):
Maybe that person can ask with more… Al Zorro.
Yes, he’s saying that’s exactly what he means.
Jacob Steeves (49:26):
Catastrophic forgetting. Oh, okay, like the memory, like long-term. Yeah, okay. Yeah, right.
Okay. Yeah, actually, it’s interesting you mentioned that. So the funny thing is, for folks who don’t know, maybe it’s good to delve into that for a little bit before we delve into the solution. Catastrophic forgetting is a problem described in AI that is a little bit hilarious and very concerning. So the story goes… I forgot what university it’s from, but the researcher took a trained neural network… Basically, it’s a self-driving car neural network. So it’s able to recognize objects on the street, everything from a pedestrian to a curb to a bus, a car, etc.
So what he did is he fed it a picture of a stop sign. Now, what he did with the picture is he just changed the color of one pixel. So pictures are made of little tiny, tiny dots, and he just changed the color of one dot from red to green. The model thought that that was no longer a stop sign. It thought that this was an ostrich. And it really just brought into light how bad some neural networks can be, because they’re trained to be very narrow specialists and not resilient generalists. So what that really means is that the neural network, instead of making a merely incorrect prediction of something similar (maybe not a stop sign, but it could maybe have thought it was a green car or something), predicts something completely unrelated, like an ostrich. That really just tells you that it’s only good at the data that it’s seen, but it’s not very good at data that it’s never seen before. So that’s where the BitTensor protocol kind of will shine. And that’s because what happens to any model in the BitTensor protocol is it’s seeing something called adversarial data. So that’s data it’s never seen before, and that’s data that’s potentially harmful to it. That’s what’s going to cause it to make an incorrect prediction as a result. Now, adversarial data is not really bad. It’s just data that will throw your network’s predictions off. And what you want is a network that is resilient to adversarial data. So what a lot of people are doing nowadays to get around the catastrophic forgetting problem is to feed their neural network all kinds of data, including adversarial data. Now, because BitTensor’s network is so massive, you’re bound to see adversarial data sets. That’s just how it works; it’s part of the system.
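To make the one-pixel story concrete, here is a minimal, purely hypothetical sketch (not BitTensor code, and not the original researcher's setup): a brute-force search that pushes single pixels of an input to extreme values until a toy linear classifier's prediction flips. The model, image size, and pixel budget are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the self-driving model: a linear classifier
# over a flattened 8x8 "image" with 3 classes. Purely illustrative.
W = rng.normal(size=(3, 64))

def predict(img):
    """Return the predicted class index for a flattened image."""
    return int(np.argmax(W @ img))

def one_pixel_attack(img, extreme=5.0):
    """Search for a single pixel whose change flips the prediction."""
    original = predict(img)
    for i in range(img.size):
        for value in (-extreme, extreme):
            perturbed = img.copy()
            perturbed[i] = value  # change exactly one "pixel"
            if predict(perturbed) != original:
                return i, value, predict(perturbed)
    return None  # the model resisted this crude search

img = rng.normal(size=64)
result = one_pixel_attack(img)
if result is not None:
    pixel, value, new_class = result
    print(f"pixel {pixel} set to {value} flips class {predict(img)} -> {new_class}")
```

The point of the sketch is the same as the anecdote: a model that has only fit its training distribution can have its output flipped by a tiny, targeted change it has never seen.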
So as a result, the catastrophic forgetting problem should, at the very least, be somewhat muted on BitTensor network models, as opposed to perhaps even being taken out completely. I can’t make that claim, only because we haven’t tested that properly yet. But theoretically speaking, on the BitTensor network, the catastrophic forgetting problem should be lessened because of all the data that’s running around all over the place. Constantin, do you have further comment?
Jacob Steeves (52:17):
There’s so much to this. The catastrophic forgetting paper was actually referenced by the white paper for BitTensor, because in one of the solutions to catastrophic forgetting, the solution is to figure out what neurons in the neural network are useful and which ones are not.
And in order to do that, you use data that’s not in the neural network’s training set, and an information-theoretic technique, where you look at each of the neurons and you go, hey, if we remove this neuron, how different will the output of the neural network be? How much worse off will you be? And that’s the mechanism we use to measure the value of the individual peers in the system. So we go, this neuron over here, this peer in the network, is producing some knowledge.
And we measure the information production from the validators in the same way that validators in other consensus protocols measure the production of the other nodes. And so we used that paper as inspiration for, I think, one of the core techniques in BitTensor. So it’s interesting that that question was raised.
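The ablation idea described here (remove a contributor, see how much the joint output degrades) can be sketched in a few lines. This is a toy illustration under assumed simplifications, not BitTensor's actual validator code: each "peer" contributes a logit vector, the ensemble averages them, and a peer's score is how much the loss rises when that peer is left out.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def ensemble_loss(logit_sets, target):
    """Cross-entropy of the averaged peer logits against a target class."""
    avg = np.mean(logit_sets, axis=0)
    return -np.log(softmax(avg)[target])

def peer_scores(logit_sets, target):
    """Score each peer by how much the ensemble loss rises without it."""
    base = ensemble_loss(logit_sets, target)
    scores = []
    for i in range(len(logit_sets)):
        ablated = [l for j, l in enumerate(logit_sets) if j != i]
        scores.append(ensemble_loss(ablated, target) - base)
    return scores

peers = [np.array([5.0, 0.0, 0.0]),   # confident peer pointing at class 0
         np.array([0.0, 0.0, 0.0])]   # uninformative peer
scores = peer_scores(peers, target=0)
print(scores)
```

Running this, the confident peer receives a clearly higher score than the uninformative one, which is the behavior a validator needs in order to reward knowledge rather than mere participation.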
Yes, yes. Thank you, Alzaro, for the question. He has a better question that I’m going to ask you guys next. And that’s about, you touched on this a little bit. I was going to, actually, before even that, some of the, myself and George here in the audience, he uses GPT-3 extensively, and sometimes just for idea generation. But I was really impressed when I first saw what it can do. And then also, I really look forward to GPT-4 coming out. I’m wondering, how is BitTensor, what you can do with BitTensor different from something like GPT-3?
Jacob Steeves (54:19):
Well, it’s actually the same. We are working on the same problem. And, you know, it’s not so much about difference, it’s more about better, about pushing the performance of that core technique, which is unsupervised learning, where you’re taking large, unlabeled corpora and refining them down to an understanding of that corpora. GPT does that really well. But there’s GPT-1, there’s GPT-2, GPT-3, right? They’re going farther and farther down the same dimension of understanding. So much so that the models, you know, have the ability to, say, generate text, generate long sequences, understand the meaning of entire Wikipedia pages. That’s where we’re pushing with BitTensor, right? We want the scale so that we can exceed the GPT models, essentially, in their own domain. And if you look at what OpenAI did, they built a company on top of that core model.
And the reason they can do that is because it’s a non-specific model and works on general knowledge. You can fine-tune it, that product, to solve all of these other problems in the same domain. So that is why it makes sense for us to work on that problem from a collective perspective.
Because then the product’s general. We can all use it. And that’s why we work on that problem. And it allows us to accrue a lot of value to just the single product for the entire network. If we can understand language, if we can understand audio, if we can understand images, all of the other problems become sub-problems of that in the domain. So let’s say understanding semantics or sentiment analysis of a tweet, all those problems derive from that core proposition. It’s what OpenAI is doing. So yeah, we kind of are a GPT-3 model. But we’re bigger. Well, we’re not yet, but we’re getting there. Like, the general estimate is somewhere around 200 billion parameters now. And we don’t exactly have perfect information about the size of the network. That’s one of the interesting things, because we don’t have all of the weights. But it’s getting to be that scale. Me and Carol, who’s in this call right now, we were talking today, doing some off-the-cuff calculations about what we think that we can train with BitTensor. And we think we can get to maybe like an 8 trillion parameter model and actually train that, which would be more than 40 times bigger than GPT-3.
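The scale comparisons in this stretch of the conversation reduce to simple ratios against GPT-3's published parameter count of roughly 175 billion:

```python
GPT3_PARAMS = 175e9          # GPT-3: roughly 175 billion parameters

# The estimates mentioned in the conversation, as multiples of GPT-3
print(8e12 / GPT3_PARAMS)    # 8 trillion  -> about 45.7x GPT-3
print(10e12 / GPT3_PARAMS)   # 10 trillion -> about 57.1x GPT-3
print(200e9 / GPT3_PARAMS)   # ~200B network estimate -> about 1.14x GPT-3
```

So an 8 trillion parameter model would be roughly 45 times GPT-3, and a 10 trillion parameter model roughly 57 times, which is where the "57 times that model" figure comes from.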
That’s what we’re pushing towards. We want to be that magnitude larger than these large AI companies. And we want them to come and join us. We want them to help us. We don’t really see ourselves in opposition to these other companies. We don’t need to make another AI company that was fighting against OpenAI, but better and more open or something like that. No, no, no, no. We want them to come and use our infrastructure and work together on this sort of global humanistic pursuit. So for context, GPT 3 is about 175 billion parameters. Constantin, if we did, you know,
traditionally speaking, if we’re doing 10 trillion, you’d be hitting probably 57 times that model. So that’d be really, really amazing once we get to that point. But for now, it’s only 175 billion parameters for GPT-3. Considering we’re all kind of stitching everything together, we’re at least approaching that size. So I just wanted to confirm that number. So what is the thing that needs to be done for this network to be bigger and more powerful?
Is it more people joining it and contributing compute power? Yeah, it’s that. It’s also about letting the consensus mechanism or the incentive mechanism
Jacob Steeves (58:25):
sort of reach equilibrium. Because we just started, because we’re just beginning to understand this technology, there’s a lot of disequilibrium in the relationship between the number of tokens that we’re minting, the price of those tokens, the amount of compute that people are putting online. And we need to fine tune that incentive mechanism to make it so that people are going to buy these GPUs. And we get that hyper-efficient market system that you get with Bitcoin, where the incentive mechanism is driving people to create mining farms and buy GPUs. And we’re kind of going through that process right now. And I know a lot of people on this call are actually buying GPUs and hosting them on data centers, which is exactly what we want to be happening. Every day, the network gets stronger and stronger and stronger and stronger. Maybe we have 1,000 GPUs today, but we want to have 10,000 next year and 100,000 the year after. And that’s that kind of accrual that’s occurring.
Then what do we do as the foundation? Because the way it’s set up is basically there’s like three groups. There’s the foundation, which is a non-profit that is guiding the incentive mechanism. And we intend to organize a decentralized autonomous organization in there that governs the way that we move the incentive mechanism. And then there’s also the miners that are running the infrastructure, the machine learning infrastructure. And then I guess you might even say there’s a third category that’s like the clients of the system.
And all three of those groups have power in BitTensor, in the same way that in Bitcoin, you have miners, developers and liquidity providers, people that buy Bitcoin.
We have that same dynamic. And it’s very fun trying to optimize this system so that we’re all aligned together and we’re doing the right thing, with the incentive going towards producing this digital commodity in a way that we can extract properly and turn into real value. We’re an AI company. We’re also not. We’re also a collective of individuals. We’re aligned with a DAO. We’re an infrastructure company, but we’re also not. We have individual mining companies. There’s product companies. It’s an ecosystem. And if you look at a lot of other products in the space, in the digital currency space, you’ll see the same thing. There’s so many different individuals and organizations being aligned by a core mechanism. And this kind of harkens back to what I was talking about before, how there’s a decentralization of power. And that’s a good thing if we want to make sure that the value that’s created by this technology is flowing out to the individuals equitably. We need competition between the individual components. We need fairness. We need democracy integrated into the core technology so that this behaves
well for us all. Absolutely. Absolutely. So am I correct in assuming that the reason that I see BitTensor having a chance to easily blow way past something like GPT-3 is because individuals can contribute compute to the network and also that they will have the incentive to do
Jacob Steeves (01:02:14):
that. Is that right? Yeah, that’s really important. I guess you could say there’s the ability for us to seep into compute that is unused. That’s valuable. But you get that from altruistic computing. You could say latent computer power exists out there. And because we have a protocol which is borderless, then it can all be used. That’s a huge thing. And that’s what makes SETI at home even larger than what paid university clusters can achieve. Right now, we have a lot of people that are moving away from Ethereum after the merge with these GPU mining rigs. And we can pick them up. OpenAI can’t. And so that’s a big part of what we do.
And another aspect is just the competitiveness of incentive mechanisms over the competitiveness of a top-down bureaucracy, where in a top-down bureaucracy, you have individuals that make decisions and those decisions don’t always have feedback to the bureaucracy itself. Did we get the right type of GPU? Was there a cheaper way to get that amount of RAM and that many flops? So that incentivization creates this hyper-competitive system, and it did the same thing in Bitcoin, where all of a sudden we have innovation occurring underneath this protocol that maybe nobody thought of doing. Right? Oh, great. We defined that we wanted this information-theoretic objective to be minimized, and then people go out and solve it. And if they can do it with RTX 3090s or A6000 Nvidia chips, or maybe they just sit behind their keyboard and answer by speaking into their mic, that’s all possible underneath an agnostic mechanism. So we think we’re tapping into that quality as well. And then I think the final thing is that we have a value proposition, a new way of creating an AI company that’s aligned, a new way of creating an internet for people that is bigger. And I think there’s a lot of people that just want that to succeed, and I don’t think we should understate that value proposition as well.
Absolutely, absolutely. So the distributed computing with incentive for people to join it, and the value is distributed, as well as the control or authority over it, for which I suppose we would use the word governance. And yes, on to the blockchain. I mean, you guys have already mentioned that in order to, okay, let me say this: is it fair to say that you’ve made basically machine learning or AI compute power a commodity? Is that right? And we’re not the only people to have done that.
Jacob Steeves (01:05:25):
You know, Bitcoin commodified compute power, but they commodified a form of compute power, that was, you know, the ability to do hashes, and therefore produced ASIC chips. Ethereum needed GPUs. And I forget the algorithm they were solving with that; I’m not sure if it was just SHA-256, I think it was something else. So it commodified that. Filecoin is trying to create commodities around storage space. There’s a lot of projects in this space that are doing something almost exactly the same. Right? You define the objective. But what we are doing, and I really don’t want to say that we’re commodifying compute, because it’s the wrong level of abstraction to think about what we’re doing. Because we measure the information produced by the miners, that means that we actually measure the thing that’s valuable in artificial intelligence. And I think that’s really, really important. You know, you get what you measure at the end of the day. Like, the quote in the paper is, if you want ants to come, you put sugar on the floor. Like, if we were to measure compute, we’d get maybe a compute network. But that’s not what we want. We want an intelligence network. So that’s why I would not say that it’s incentivizing computing. But that’s a proxy for the same thing, right? Turns out that intelligence is the intersection of, you know, model, data, and compute. So by proxy, yes. Okay. And what do you think, I mean,
intelligence itself, you know, there’s all these questions, at least in public, about AI and sentience. And recently, I find that intelligence is not like a binary state, a yes or no, but that it has degrees. And because of that, maybe in retrospect we could look back and say that, okay, AI overall became sentient between 2015 and twenty-whatever, but it is not something where it’s like a yes or no. What do you think? There’s, yeah, intelligence is not even well defined amongst people that, you know,
Jacob Steeves (01:07:40):
have PhDs in it. It is not a defined term. So, like, we can each kind of have our own definition of it. We make a very specific description of what we are trying to increase in the paper, right? So we define it, right? It’s like information-theoretic value against an objective. But that’s maybe not intelligence. I mean, Ala, what do you think? I think that, yeah, it’s a bit of a dicey field
to answer. Because now we’re delving into philosophical concepts, defining what intelligence actually is and what it isn’t. You know, if intelligence is as simple as making a decision, then really three lines of code, an if and an else, would really qualify as intelligence. Yeah. But if we’re talking about sentience, which is a whole other ballgame, that’s another kind of philosophical concept to work with, because, you know, you start getting into questions like, you know, what is sentience and what we consider to be sentient and so on and so forth. The only thing that I can tell you, though, is that as, you know, AI scientists and engineers and stuff like that, you know, buzzwordy things like the singularity and, you know, achieving artificial general intelligence and so on, they’re all really great, lofty ideals. But we’re concerned with the here and now. And the here and now is the problems of BitTensor itself. And the problems that we’re trying to solve today, which are knowledge compounding, you know, decentralization, you know, ownership, accessibility and so on, those are more important for now. If we achieve the scale of BitTensor that, you know, we are projecting for ourselves, then, you know, creating AI that is perhaps, you know, almost mirroring human sentience should not be a big stretch of the imagination.
But for anybody worried about Skynet, try changing the location of your Roomba and then ask it to go back to its home base. You’ll really enjoy the popcorn at that point.
Right, right. Gosh, this is very exciting. I like that it’s so democratic because, you know, my mind has been asking the question for a while now of, like, okay, so AI is really powerful, and how do you ensure that, you know, the development of these things is safe? Because as far as I can tell, basically, the people with the resources, I mean, they’re doing whatever they want. There’s not really any kind of, you know, oversight, or I don’t know if regulation is the right word, but there’s no overall control. And at the same time, you know, I would be against legislation that would cause innovation to be significantly slowed down. At the same time, I feel like there needs to be some kind of oversight as to the development of AI in general in the world. How do you feel about this? And beyond that, there’s one other question from Al Zorro, thank you for your questions, that I will ask. And then we can, I think we have already gotten into it. I was going to, you know, ask the other question and then ask you guys to talk about concerns in artificial intelligence in general. Some of those concerns, like centralization of that power and authority, are, of course, things that BitTensor solves. So let’s see, I’ll just get into the question because I’ve had you guys for an hour and 15 minutes. Is it okay to continue for a bit longer? Okay.
So the question is, and I think this is an excellent question, Al Zorro is asking: what are the most exciting projects you guys want to see built using the BitTensor network? I’ll modify that by saying, what are some, because, you know, you guys are on the very inside of this thing and have probably given this a lot more thought than anyone else here. I was wondering, you know, what are some of the things that you can conceive of that you see being solved down the line with BitTensor that maybe some of us here in the audience who haven’t thought about this as much as you have, haven’t thought of, that might sound interesting to people. That was really drawn out. Does that make any sense?
Jacob Steeves (01:12:17):
I want to pull Carol on the stage if he’s here. Carol, put your hand up, because I think this is a really good question for Carol. He’s also one of the people that works with us. If he’s not here, I’ll answer that question. Unless, Ala, you have something. Oh yeah. Yeah. Get up here. Get up here, Carol.
Sure. So definitely. Yeah. I think you touched on some of it, and I think it was Ala who mentioned, you know, books being written that we would love, let's say, as much as we might love any bestseller. There she is. Awesome. Hi, Carro. How are you? I'm living the dream, and y'all? That sounds good.
Carro (01:13:06):
Yes. Hey, what was the question again?
Yeah, sure. The question is: what are some of the coolest things that can be done with the BitTensor network down the line that maybe some of the folks here in the audience, or I, haven't thought of? The one thing that was mentioned is the idea of having books written on topics that are completely generated, and that it might get so good that we might really prefer generated books over human-written books at some point. But yeah, basically, the question is: what are some of the most exciting things that you want people to build using the BitTensor network, or that you see the possibility of the BitTensor network assisting in developing? That's the question.
Carro (01:14:00):
Yeah, absolutely. So I think that books is a really great example, but it's also kind of the easy example, right? There was a really good tweet that came out last week that showed OpenAI's GPT-3 using a Python REPL interface to write code and then execute that code. So it can do precise math, it can interface with Google searches. It really demonstrates some of the cool, powerful techniques that these language models can use. And because BitTensor is essentially one large mixture-of-experts model, you can have it interface with these different environments. There could be a dedicated expert model in the BitTensor network that you interface with through a Python environment, one that specializes in writing Google search queries, say. So you can interface with Google, with all of this different tooling, and because it's a mixture-of-experts model, create new types of model pipelines that weren't possible before. Also, things like Stable Diffusion can be added into the BitTensor network, so that specific types of diffusion prior models are deployed. There could be, for example, an icon-generation Stable Diffusion model that is incentivized within the network. So I think that what's coming up here soon is very promising.
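To make the mixture-of-experts idea above concrete, here is a toy sketch (this is invented illustration, not BitTensor or OpenAI code; the expert names and the keyword-based routing rule are made up). A gating function dispatches each prompt to a specialist: a "math" expert that actually executes the arithmetic instead of predicting it, and a "search" expert that only writes a query string:

```python
# Toy mixture-of-experts router: a gate inspects the prompt and
# dispatches it to the expert best suited to handle it.

def math_expert(prompt: str) -> str:
    # "Precise math" by real execution rather than language modelling.
    expression = prompt.split(":", 1)[1].strip()
    # Empty __builtins__ restricts eval to plain arithmetic (toy only).
    return str(eval(expression, {"__builtins__": {}}))

def search_expert(prompt: str) -> str:
    # Specializes in turning a request into a search query string.
    topic = prompt.split(":", 1)[1].strip()
    return f"site:docs.python.org {topic}"

EXPERTS = {"math": math_expert, "search": search_expert}

def gate(prompt: str) -> str:
    # Trivial stand-in for a learned gating network.
    return "math" if prompt.startswith("math:") else "search"

def route(prompt: str) -> str:
    return EXPERTS[gate(prompt)](prompt)

print(route("math: (3 + 4) * 2"))        # prints 14
print(route("search: asyncio tutorial"))
```

In a real mixture-of-experts model the gate is a learned network producing soft weights over many experts, but the dispatch-to-specialist structure is the same.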
Is it fair to say that one of the most important features is that there is all of this compute power everywhere, but that you're going to give people the incentive to contribute? Am I understanding that correctly? In that that's the thing that's going to, at least in my mind, give this a good chance of becoming easily the most powerful network of its kind in the world. I'm going to pass, I'm going to pass, wait,
Ala Shaabana (01:16:25):
do you want, was that question for me or Jay? Well, I mean, no, you weren’t the last person.
Anyone, anyone, if this becomes like a discussion between the three of you, that would be completely fine. I mean, if you’re on the stage, you can speak anytime, you can interrupt anyone on the stage. It’s a discussion. Yeah, we want to be bigger.
Jacob Steeves (01:16:42):
And like, we think that, you know, it's exciting. We're already getting very big. If you do some raw napkin calculations, we think that we can truly scale to be probably the biggest neural network in the world, probably have the largest number of GPUs collated, especially on an AI problem. And that wouldn't even be remotely crazy, because if you look at other cryptocurrencies that use GPU mining, a lot of them have way more GPUs than even, you know, OpenAI is using. It's quite insane. The ability of the cryptocurrency industry to throw compute around, or I would say not even throw it, but suck it up into these protocols, is truly astounding. And, yeah, we have to take advantage of that compute. That's our mission. That's probably one of the scary things about what we do: we have this great potential, and we need to soak it all up and use it properly. So when you ask me what problems need to be solved, it's anything in this network that helps us collectively get to where we need to be, because at the core of the technology is competition.
It's great that that's there, because things need to be based on at least some modicum of competition so that there's fairness involved. But what will really take us up to the high levels of performance in artificial intelligence will be collaboration in and around that competition. So the rules that help us collectively train this model, signaling to other miners what information needs to be worked on in the network, these are the kinds of things we need to build on top. We focused on this base layer. The base layer is laissez-faire, right? Laissez-faire capitalism. But we need to now build on top of that. And a lot of this will probably come from the foundation, because we sit in a position where we are kind of altruistic with the network.
And so we need to build this kind of stuff. But, you know, people in the community can also do it. Right. And this is already happening, a lot. We just have effectively random people on the Internet. They obviously are not random anymore, because we're all working together as a community. But people come into the system and ask, okay, what needs to happen? Great. We need better tutorials. We need this tool. We need a wallet. Everything just starts getting built out around it. And that's one of the things that just blows me away about this space: its ability to collect amazing people and organize them to work toward this goal. So whether that's wallets, tutorials, videos, helping people, building channels, OTC channels, exchanges, liquidity providing, market making, application development, tools for sharing models and explaining what's happening across the network, explaining what the parameter count is, all of that kind of stuff can be worked on and integrated. And people are doing that. And for me, that's the thing that brings tears to my eyes every time I see it. Absolutely. Gosh, gosh. Well,
thank you guys so much. Now, in retrospect, I feel it would have been good for me to do a bit more reading on BitTensor before this space, because I really just spent some time today. But I wanted to point anyone in the audience to bittensor.com. Starting from ground zero, which is where I was, the learn section of the site is really useful. Of course, if you're a developer, or if you're more technically proficient, take a look at the white paper. For people who are completely unfamiliar, the white paper is the technical paper that describes what a project is. So, coming up here on the closing of this space: I had a space a couple of weeks ago that tried to get across, in layman's terms, what the concerns with AI are, and that the public should at least be aware that there are big concerns here. I think, Const, you mentioned a bit earlier how we are doing things that could potentially, down the line, be catastrophic. And that's comparable, in my mind at least, to when the possibility of making a nuclear bomb became real. In those days, I feel it would have been good for the world to come together, and I don't know if this would ever happen, but to come to a consensus on whether we even wanted to go in that direction. So far we haven't destroyed ourselves, but had there been a nuclear disaster, and had 70 or 80 percent of humanity died, I feel we would be sitting here after it thinking that the time to have prevented it would have been in the very early development stages.
So, to close this space, I was going to ask all of you your thoughts on basically what you feel, because you're as insider as it gets and yours is an expert perspective: what do you feel are the concerns with AI? Should we, at some level at least, be concerned? And what is it that anyone can do, if anything, to try to make sure that the development of AI doesn't end up being catastrophic for us in the long term?
I think, Const, you just had a chat with Shalini Kintya about this. You might be the best one to answer this one. Yeah, I mean, there are different levels to it. There's
Jacob Steeves (01:23:19):
like the most obvious abuse in AI. We're talking bias, where the machine learning models are filled with hate speech and pornographic images and all that. Again, I think that is a legitimate issue that should not be overlooked. It's, you know, layer one, right, facing us today. And it's also very much out in the open. One of the reasons it's so out in the open is that it's a great one for corporations to focus on, because it feels kind of external to them, and in many ways they hire all these engineers and researchers to work on that particular problem. I think the one that concerned me for a very long time was just the ability of large corporations to abuse artificial intelligence to manipulate their clientele. In the case of, you know, YouTube algorithms and Google searching, you guys have probably watched that documentary about Cambridge Analytica.
I forget the name of the documentary. It's slipping my mind. But there was that famous one, just about how the AIs can manipulate people, and they can manipulate people's political opinions, and they know us better than we know ourselves. They're very, very good at using a lot of data to understand who we are in a mass sense, and then gravitating content towards us, to basically be hyper-sycophantic creatures of manipulation. I think that that's a reality today. And then tomorrow, you know, I spoke on this: we have this incredible technology seeping into every aspect of our lives, in every industry, and it's going to be a trillion, trillion, trillion, trillion dollar company. So, do we want that to be a single company with a single CEO, or run by a single organization? I think that answer should be pretty obviously no. We don't want that. We don't want that hyper-concentration of wealth to occur. And that concentration of wealth then falls into a concentration of power, which could, if we're going to really go far out, come back as technology being used to enslave humanity. And then I think the final one, the top one, is AI taking over and building robots, and then humans are dead. You can say it's when AI becomes no longer symbiotic with humanity whatsoever, and that's what they talk about in sci-fi. All of those problems exist, and I think that BitTensor has kind of a unique answer to all of them. We need to work on every single one of them. We need to have an answer to every single one of those problems, because they're going to come towards us at different time scales for this project.
One year, five years, ten years, a hundred years, a thousand years: all of those need to be thought about. And when it comes to the first one, hate speech and abuse, the validators in BitTensor are defining the datasets.
And that is run with consensus. So for the first time in the world, the objective function is actually decided via consensus in a network. And I think that's good, because it means that if there's a distribution of ownership of that dataset, then we can collectively, or even democratically, decide the answers to these hate speech problems. That was the first one. Moving forward, the abuse of the individual by corporations: I think that if we understand the technology, if we're making it as a collective, then we will understand the ways in which it manipulates us. Maybe cynically, the fact that we can also profit from BitTensor means that, well, hey, if it's manipulating us, maybe we can at least profit from it too. So that's great. The centralization of power, I mean, I think that's obvious, right? We're building the DAO, the Decentralized Autonomous Organization. With that in mind, we really want this system to be controlled by a lot of people in a very democratic way, as democratic as possible. So we should be sort of antithetical to that centralization of power. And then finally, the long-term sci-fi scenario, looking 10,000 years out, or maybe it's not so far into the future, of artificial intelligence detaching from humanity and becoming a threat. It should be very clear that in the way we build it, we are lodging that power over the AI right into the hands of individuals. We are holding the keys. We've got the carrot with BitTensor, because we control the consensus mechanism, we control the incentive mechanism for the AI. So that will help us potentially build a system which is very symbiotic with humanity, nested into the organism or ecosystem of humans, so that there's that symbiosis and we have a natural growth. Yeah, that would be my way of thinking about it.
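The idea of an objective decided by stake-weighted consensus can be sketched very roughly as follows (a hypothetical toy, not BitTensor's actual consensus mechanism; the validator names, stakes, and dataset options are all invented for illustration):

```python
# Toy stake-weighted vote: each validator backs one dataset option,
# and the option supported by the most total stake wins.

def stake_weighted_choice(votes: dict, stakes: dict) -> str:
    """Return the option backed by the most total stake.

    votes  maps validator -> chosen option
    stakes maps validator -> stake amount
    """
    totals = {}
    for validator, option in votes.items():
        totals[option] = totals.get(option, 0.0) + stakes[validator]
    # Winner is the option with the highest accumulated stake.
    return max(totals, key=totals.get)

stakes = {"v1": 40.0, "v2": 35.0, "v3": 25.0}
votes = {"v1": "dataset-A", "v2": "dataset-B", "v3": "dataset-B"}

# v2 and v3 together hold 60 stake vs. v1's 40, so dataset-B wins.
print(stake_weighted_choice(votes, stakes))
```

The real mechanism is far richer (continuous weights over peers rather than one-shot votes), but the key property is the same: no single party picks the objective; the stake-weighted majority does.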
Yeah, gosh, very thorough and actually very concise at the same time, because there are immediate concerns today, and then there are those future concerns. But let's say those concerns won't materialize for 100 or 200 or 300 years: that would make it the same as climate change, right? It's not the case that a majority of humanity is going to die next week from climate change, but there are things happening today that make us realize that, wow, this is potentially an extinction-level problem, and people recognize it as that, right? I mean, I feel like relatively connected, educated people would rank climate change along the same lines as nuclear weapons, but society is not quite there yet with AI, even though it should be. Ala, any thoughts?
No, I think Const really kind of hit that one on the head. Can’t really, this is one of the first
times where I don't have anything to add. All right, and Carro, since you're on the stage, any thoughts on basically concerns with AI? Yeah, I think Jake did a fantastic job expressing it,
Carro (01:30:08):
and I think that the best way to think about alignment is to think about it like steering in a car. If you didn't have steering, if you didn't have brakes, if you didn't have gas, then you're not really in control, and that's very scary. So I think alignment's important, but making sure that we're not trying to optimize towards outcomes, but ensuring that we merely can steer, is really what's important.
Yes, yes. Alignment, yes, that is one of those words where, okay, you can see how, well, already in AI systems there is a misalignment that starts developing. Oh, my goodness. So, first of all, just a quick question: what is the cryptocurrency in the blockchain aspect of all this? Is it TAO? Is that right? Yeah, it's called TAO, that's right.
Okay, it's called TAO. And I think now I did some of my, I suppose you'd call it vetting, while this space was running. So actually, I trust this particular project already, also given who you are. Just for the audience's sake, I did see that this cryptocurrency had a fair launch, and based on what's been shared here, we should have a clear idea of why it is necessary and what it is. And gosh, I have a hard time thinking of any project I'm aware of that is, on the grand scale of humanity, more important than this. So it all sounds very exciting. I really appreciate you guys' time and sharing all of this. Before I close out this space, I was just going to ask you to share anything that you want to share about the project, any thoughts or comments. Let's do a closing round of thoughts, starting with Const, and then we'll go to Ala, then Carro, and then we'll close out this space. Thank you guys. Thank you. Really appreciated the
Jacob Steeves (01:32:27):
opportunity to come here. A really great conversation. Amazing questions. I think we covered so much. The cryptocurrency, yeah, you touched on the major points. TAO is the metric of intelligence, and we use it to run the alignment for the entire system. I think that from the beginning, our philosophy was that what we wanted to build here was a high-performing mining network. That's why we didn't do a pre-mine: we wanted every iota of value to flow through the computational aspect of BitTensor. That's what's happening today. That creates this weird situation where there's so much value to be captured by miners. For everyone here who doesn't know about BitTensor, that's where I would hope you guys come in. Come and mine with us. Get into the system from the supply side. Also, if you're interested in purchasing the token, that's obviously possible. We focused on the supply side because that's really what governs us. It's really fun. Carro and I were talking about how much fun we were having today running these miners. It's wicked
fun. Can I quickly interrupt you there? Because I don't remember, or I didn't get this detailed in my study: what are the hardware requirements to mine this? I know that you started running this network in 2021, I suppose early, even though that is a relative term. Would regular computing hardware be able to mine this? I imagine, because we are now in 2022 and Bitcoin was 2009, that you're not going to see the same slow rate of miners jumping on, or adoption, or all of that, because now this realm of decentralized tokens, let's say, is well established. Is there still a chance for a regular computer to do this?
Jacob Steeves (01:34:53):
Let's just say that when you look backwards on an exponential curve, it always feels like you get the opposite of FOMO: regret, like, oh God, I should have run more computers in the past. And that seems to be the case always. The amount of compute required changes every day, and that's good, that's healthy. If you've got a GPU, you can get into BitTensor. So get yourself one. And it needs to be NVIDIA, right? Yes.
Yeah, we don't support anything else just yet.
Awesome. Awesome. Awesome. So it can't handle anything else. And if not, we'll move on to Ala.
Jacob Steeves (01:35:46):
Not a must.
Cool. Yeah, thanks again for hosting us. Honestly, this was a really fun, really interesting conversation. And as Const said, really, really good questions. A lot of them were very thought-provoking. I love that we almost delved into philosophical territory with the singularity there; that's always a really fun minefield. We should probably dedicate whole other spaces just to that. As for BitTensor itself, the one thing I wanted to mention is that, as I'm sure everyone on this call has noticed, Const and I, Carro as well, we're all artificial intelligence engineers. Some of us have a bit of a crypto background, but the main point of the blockchain is to incentivize knowledge production. It's not really a monetary value thing; that is a secondary thought. As a result, that's the one thing to keep in mind that really differentiates this token from most others, and that's the reason we're looking for true utility here. This problem of incentivization is really only solvable by a blockchain, as opposed to something else. The only other thing we could think of at the time was charging people via Stripe on a credit card, but that's obviously not the best approach here.
Okay. Yes, absolutely. Of course, given there are like 15,000 tokens out there, this project, just looking at what it is, is one where a token is absolutely necessary. And when it comes to utility, actually, I would say it's hard to have utility that is bigger, more meaningful, and more consequential than this. So if anyone here is looking into getting into any kind of mining: this project is how old now? Less than a year, right?
Yeah, we’re approaching a year in November. So not quite.
A year in November. Yes. So definitely, you know, again, BitTensor.com would be a good starting place. Ala, did I cut you off?
No, no, that's all. That's pretty much all of it. Yeah, really looking forward to seeing everybody in Discord, and obviously any questions you might have even after this call, feel free to ping us or the developers there as well.
Definitely. Look, Ala, I'm as excited as anyone about those conceptual or philosophical questions, like what is intelligence, or what is consciousness, and things like that. But I feel like they do need their own dedicated spaces. And actually, two weeks from this Thursday, the topic of the regular AI series that I run is going to be consciousness, and we're going to talk about that a little. Carro, any final thoughts to share with us?
Carro (01:38:35):
I just want to say this was an excellent space, hopefully very informative for everyone. I had a great time; hopefully Jake and Ala also had a great time. The one thing I want to say about BitTensor, that I really love about it so much, is how productive it is. When you think about accumulating resources, like digging rare earth metals, like lithium, out of the ground and processing them and putting them into batteries, that is an incredibly productive procedure. And I think that the way BitTensor incentivizes intelligence is very much in that same vein. So it's really exciting to see the network grow the way it has, and to know that it's producing something that is incredibly valuable and incredibly productive to the betterment of humankind. Hopefully everyone feels that same way. I know the team does, and our community is incredible. It's really exciting what we're building.
Definitely. Definitely. Thank you so much, gosh. You know, I was looking at some tweets a few days ago, between when Const and I scheduled this and now, and I've seen tweets, in replies to my promo or I don't remember exactly where, saying things like, my gosh, BitTensor is going to solve all of humanity's problems. Statements that sounded very exaggerated. But now, realizing what it is, I'm like, well, yeah, sure, more than any other project I can think of, that's true. Because, gosh, people here might be familiar, more people here might be familiar, with GPT-3 and similar systems. So basically, you can imagine that kind of a system, what can be done with that sort of AI, and then the possibility that it's going to be incentivized. Incentive is necessary. Look, I wish that we were altruistic. I see it in medicine, right? In medicine right now, our physicians have the incentive to see the most patients or to do the most procedures; this is how the compensation system is set up, right? Now, while I would like all physicians to become saintly and to not have that affect patient care, that's not really the case. So in my view, the only way to fix this is to align the financial incentive with better outcomes and well-being, as opposed to with the most patients seen or the most procedures done. In a way, it's the same with this kind of a network, because yes, SETI was big and there is altruism in humanity, but really, in order to accelerate something, you need to change that financial incentive and align it with something really good, like what BitTensor is doing. So a very exciting project. I'm at the very first few hours of looking into this. In fact, the next thing I was going to do was to install it on the Mac. But I don't have, I think I have the Intel GPU.
But given that, and Stable Diffusion, I think it's time for me to have a desktop again, which I used to have years ago for having fun, I suppose. Now I think I will actually do that for those two things. Relatively speaking, at least in terms of our generation, yes, it's awesome. But really, when I think about the kinds of problems that can be solved with a GPT-3-like system that could be so much more powerful, for all the reasons we have discussed here, I mean, that is really just mind-boggling. So thank you guys for your time. I appreciate you being here, and I appreciate that the nature of this conversation was way more calm than what I'm used to in my discussion spaces. So I definitely will be doing more of these. And again, bittensor.com, people, to get started on the learn section; and there's the white paper on that site too. I would not sleep on this, people. And I say that from the perspective of having been told about, or just in some way coming across, Bitcoin in 2012 and a half, I think, or 2013, and then sleeping for the next three years before digging in further. So if you're remotely interested, I would do the reading and learn about BitTensor. And hey, one day you're going to come back here, and if I'm still alive, you're going to thank me for a lot of good that's going to come out of it. And I love that that good is not just for you as a person contributing to BitTensor. This is exactly what we need. This solves so many of the issues with AI.
You can imagine, and I don't want to badmouth a company or anything like that, but if we, as a society or as a population, were to vote on some of the things that we could change about Meta's algorithms on Facebook, so that they would have some kind of a positive bias, for example, not to automatically propagate content just because there was an initial reaction to it, but to consider it from the perspective of whether that content is actually healthy or not, and have that factor in, I feel that if we had governance and authority over something like that, we would do it. We would do it for ourselves and our children, so as to lessen the negative aspects of those AI algorithms' manipulation of humanity, or to make it more positive. What I'm trying to say is that we would do that if we had governance authority over something like Facebook, but we don't. So really, in AI, there is no other way. That's the way it has to be, and that's what BitTensor is. So thank you guys so much, and I certainly will be learning a lot more, and I hope to keep a closer eye on what you guys are doing and hopefully join the force.
Jacob Steeves (01:45:27):
Amazing. Really looking forward to having you as part of the community, man. Come join the Discord, talk to us. There are a lot of conversations left to be had here, a lot of things to be designed and engineered around these solutions, so I feel like you'd have a lot to add. I'm looking forward to that. I appreciate that. Thank you. And there's a Discord link on the
website, too, right? I think there was. Yeah. Awesome, awesome. So the point of contact with the project, and to start to get involved, is bittensor.com. I usually look for what's easiest to deliver to an audience, so bittensor.com, folks. And see you guys next time. The next space I'll be running is Thursday night, the regular AI space; it's going to be about generative art this time. I've been playing around with that, having a ton of fun, and we've talked some about it today. And then the Thursday after is going to be consciousness, as much as it might be difficult to define: consciousness in relation to AI. Thank you guys so much, and see you next time in this space down the line. Have a good day.
SUBSCRIBE TO THE BITTENSOR HUB FOR EVERYTHING BITTENSOR!
This Clip was recorded in Dr.M’s Twitter Space Chat on Sept 19, 2022
Host: Dr.M Speakers: Jacob Steeves (Const), Ala Shaabana, and Carro
Join Dr.M for deep dives into AI & Bittensor: Every other Thursday | 5 - 6 pm EST
Bittensor is an open-source protocol that powers a scalable, globally-distributed, decentralized neural network. The system is designed to incentivize the production of artificial intelligence by training models within a distributed infrastructure and rewarding insight gained through data with a custom digital currency.
Discord: https://discord.gg/Qv3fxVaXyE Website: https://docs.bittensor.com/ Whitepaper: https://drive.google.com/file/d/1VnsobL6lIAAqcA1_Tbm8AYIQscfJV4KU/view Network Map: https://bittensor-explorer-staging.netlify.app/
Socials: Bittensor: https://twitter.com/bittensor_ Dr. M: https://twitter.com/DoctorM_DO Ala Shaabana: https://twitter.com/shibshib89 Jacob Steeves (Const): https://twitter.com/unconst1 Carro: https://twitter.com/0xcarro
HASHTAGS: #BITTENSORTAO #BITTENSORMINING #BITTENSORCRYPTO #BITTENSORNETWORK #AI #artificialintelligence
Podscript is a personal project to make podcast transcripts available to everyone for free. Please support this project by following us on Twitter.