Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present.
Ala Shaabana (00:50):
Cool, cool. Can you guys hear me?
Jacob Steeves (00:52):
Yeah, we got you loud and clear.
Ala Shaabana (00:54):
Hello, everyone. Hope everyone's having a taurific Thursday. Ala, Const, are you having a taurific Thursday?
Ala Shaabana (01:07):
Very much so, I guess. Big day today.
How was the food?
Ala Shaabana (01:14):
It was actually excellent. Thanks a lot again for the community. Timo, Terrence, Scrimp, and I know I’m missing somebody else. Wait, hang on. I can do this. Zoro. My apologies, Zoro. Took me a while there. But it was amazing food. Thank you so much. Really much appreciated, guys. You guys are really what makes this community so amazing.
So I guess we'll just jump straight into it. Big news this week. We're coming up on some big updates. It seems the original decision was to migrate to Polkadot with the Finney hard fork. However, there seems to be a change of plans. Ala, Const, would either of you like to speak to that?
Ala Shaabana (02:12):
I guess I’ll take this on. Apologies. Can you repeat that one more time?
So the original plan was with the Finney hard fork chain upgrade, the entire chain was going to move to the Polkadot ecosystem. Is that correct?
Ala Shaabana (02:34):
Sorry, guys. I’m having a lot of trouble hearing, folks. Can you give me just one second?
Jacob Steeves (02:38):
I can answer that question in the meantime, Mac. Yeah, so the plan was to go to Polkadot. And it still is. It's just that the time frame has changed for the project. We needed to do a lot more investigation into what the limitations of working within the Polkadot ecosystem would mean for BitTensor. And that process was taking a fair amount of time for us because of the nature of Polkadot being a kind of permissioned blockchain system. For instance, back in January we were hoping to launch Finney, but we ran into some very minor issues with our slot, and we were delayed by some time. And that's how it got started.
And that happened a number of times. And in many ways, they were our errors.
I don't want to make it seem like Polkadot as a technology is somehow flawed. It just is quite difficult to use, or it was difficult for us. And it didn't feel like the right direction to go when, last week, we tried to launch on Kusama. We ran into some more issues that delayed the chain. And it really made us think about what we wanted to create here as an ecosystem. Did we want to be part of something that was going to hold us up for two, three weeks, or months at a time? And there were also going to be some limitations in the Polkadot ecosystem around the computational side of BitTensor, like the size of the subnetworks that we could run. That was just a function of the amount of compute that we could get onto a single block.
So in the end, we decided that all in all, it was better for us to move more swiftly, to move without permission, to be more independent of a system that had these pitfalls for us as a project. That doesn’t mean that we will necessarily always be away from Polkadot, or we will never connect into Polkadot in some way. It just means that right now, it doesn’t suit our near-term goals as a project.
Primarily, what we want to focus on is a lot of machine learning, a lot of artificial intelligence, over, say, integration into that ecosystem. So we picked the former over the latter as our priorities as a project. And so that's why we pivoted here. And the upside is that it allows us to just move really fast. So next week we can launch Finney, hands down, let's go. Let's get subnetworks, let's get staking, let's get delegation, let's get network connect, let's build registration networks, let's build multimodalities, let's have stable diffusion, let's have all these things. Instead of waiting on the slow-moving wheel of another network, let's just push ahead with what we see as our main value: a digital currency system around artificial intelligence. So that's, I think, as much color as I can give in that answer. But maybe Ala has figured out his mic and can talk to us as well.
Ala Shaabana (05:53):
Apologies, folks, apologies. I'm no good. Yeah, Const kind of nailed everything on the head. Just to add a little bit to what Const was saying, this is not to say that we are completely shelving Polkadot or that we're just not going to be building on there anymore. This is just to say that we are going to continue as a solo chain for now, and we will continue looking into Polkadot as a system to tackle the rest of our goals. So one of the things that we mentioned earlier is chain bridging. We talked about how we want to look into blockchain bridging using XCMP protocols and see if we can actually get TAO onto DEXes that way. That still remains a goal for us. That still remains something we want to investigate, and Polkadot is the way to do that. So I don't want to disparage, and I don't ever want to say that this just didn't work out, we're moving on. It's just that for now, we are still dealing with this. And so we decided to move as a solo chain so we can get back to our nimble development while simultaneously still looking into Polkadot.
Absolutely. As many of the miners on this call know, the BitTensor network and ecosystem is extremely fast-paced. As the network changes and evolves, development needs to adapt with it to keep the network robust and strong for everyone.
Ala Shaabana (07:15):
So it sounds like keeping our independence is going to be very valuable, as we can make these fast-paced decisions and upgrades to the chain.
Ala Shaabana (07:26):
Yeah, that's right. We want to keep our independence, especially as we move into Finney, because as everyone knows, Finney is such a massive update that we can't just chain-upgrade the current existing Nakamoto into it. The changes are too deep for it to be a safe chain upgrade. If we chain-upgraded Nakamoto directly into Finney right now, we'd probably just break everything, and then we'd run into lots of issues that way.
So that's why it's still going to be a hard fork, and that's why it's important to stay nimble as we move into Finney. Once we know it's stable and everything's running as we expect, then we'll definitely be looking at pushing into the DEXes, pushing into the XCMP protocols, pushing into chain bridging, and seeing what more value we can get out of our models on that system too.
So all of these functionalities, this bridging and these smart contracts, are these still possibilities that we can implement on BitTensor?
Ala Shaabana (08:25):
So going into Finney today, these are things that we are still building on top of it. But this is also why we mentioned that we're going to be looking into Polkadot in parallel to launching Finney, because these technologies are pretty much already set up in Polkadot. So it would be easier to plug into that as opposed to building everything from scratch on our side.
Given that we will no longer have the Polkadot collators and added Polkadot security, are there any changes or adaptations that the team is making to increase the level of security on the chain?
Ala Shaabana (08:59):
Yeah, so there are a few things we've already introduced in Finney that increase security, but there's no way around it: we are going to be losing out on Polkadot's shared security system, just because their system is so massive and so intricate that Finney won't be as large as Polkadot on launch. That's not going to happen. But at the same time, launching Finney will bring its own security upgrades on top of whatever exists on Nakamoto anyway. So we are losing Polkadot's, but we're also bringing our own security on top of Finney, and we're going to be releasing more information on that as it comes in.
That's super exciting. So as we discussed in the last call, the Finney upgrade is going to bring us a lot of new features. It's going to bring us delegation, a slight change in validation, and subnetwork capability. As many of us know, registration has become quite difficult on the network, so there's potential for a registration subnetwork. Ala, would you like to give a brief overview of how delegation works, what our subnetworks are going to look like, and how we are approaching the registration problem?
Ala Shaabana (10:19):
Oof, I think Const is a much better person to describe this, but I'm going to give it a shot from my side. I'm going to give you a nice explain-like-I'm-five version. Effectively, delegation is going to be a form of pooling. It's going to allow people to delegate their stake to a specific validator, and that validator's job will be to basically stay honest on the network and actually contribute to the network's overall quality of embeddings. Now, the nice thing about it is that not only does it have economic incentives, it also creates an environment where validators will actually have to provide incentives for people to stick with them in the first place. So for example, let's say I'm some whale in the network and I want to start my own delegation: what am I giving back to my delegators? What am I giving to the people who are sticking with my validator? Sure, they're getting some TAO back, but how am I contributing to the network as a whole?
So there are more ways to hold myself accountable, and there are more ways to contribute to the network and make it more honest as a result. And it'll also allow people with, for example, a little bit of TAO to stake and validate instead of having to mine from scratch. As we all know, mining from scratch right now has already become rather difficult as it is. Delegation will actually allow you to hold a few TAO and then be able to earn on that TAO as you delegate to some bigger validator. Now with Finney, what's, at least to me, near and dear to my heart and even more exciting is the subnets, right?
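Stepping back to the delegation mechanics for a moment: the pooling described here boils down to a proportional split, where the validator keeps a cut (the incentive it sets to attract delegators) and the remainder is shared pro rata by stake. The function names and take rate below are illustrative sketches, not the actual Finney parameters.

```python
def split_delegation_rewards(reward, validator_take, delegations):
    """Split a validator's epoch reward between the validator and its delegators.

    reward         -- total TAO earned by the validator this epoch
    validator_take -- fraction (0..1) the validator keeps for itself
    delegations    -- dict of delegator -> staked TAO
    """
    total_stake = sum(delegations.values())
    validator_cut = reward * validator_take
    pool = reward - validator_cut
    # Each delegator earns in proportion to their share of the pooled stake.
    payouts = {
        delegator: pool * stake / total_stake
        for delegator, stake in delegations.items()
    }
    return validator_cut, payouts

# A small holder delegates alongside a whale; both earn pro rata.
cut, payouts = split_delegation_rewards(
    reward=100.0,
    validator_take=0.18,
    delegations={"whale": 900.0, "small_holder": 100.0},
)
```

This is why a small TAO holder can earn without mining: their 100 TAO of stake earns a tenth of the pool, with no GPU required.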
So now you don't have just one massive network, one massive monolith. Now you have small subnetworks, and they can be massive subnetworks. But these subnetworks can take care of different aspects, right? So you can have a subnetwork dedicated to a specific form of learning. You can have a subnetwork dedicated to fine-tuning a specific kind of model. You can even have a subnetwork specifically for a bunch of ChatTensors just talking to each other and learning from each other. So it gives you a lot more degrees of freedom. In a way, it's analogous to somebody who's colorblind putting on glasses to actually see what colors look like. And that's what's going to help us realize the full potential of what we can do with BitTensor, right? Because with all of these models coming in, treating them all on the same playing field like in Nakamoto is really suboptimal.
Putting them into subnetworks and creating this intricate system of neighborhoods is actually going to create a much more cohesive environment for these networks to work together, enable everybody to participate in a meaningful way, and produce the high-quality machine learning systems and embeddings out of these models that we're looking for. And of course, down the line it will also enable modalities. You can have subnets for audio, subnets for images, and so on as we move forward.
Absolutely. I personally am extremely excited about subnetworks. I’ve been very privileged to have a lot of interesting conversations with some of our really talented developers, including Karo, learning about the JEA idea and… Const, I know that you are really excited about image subnetworks. Would you like to share any of your enthusiasm? Looks like we might have lost Const there.
Ala Shaabana (13:39):
We may have a little bit. It's okay. That's okay. We'll move on, at least for myself. I can actually speak a little bit to the image subnetworks myself. Frankly, I'm super excited to see what we could do with these image subnetworks, especially as we move to stable diffusion, and maybe even down the line merging them into the multi-modals, you know, where you have the really fancy natural language processors with the really fancy diffusion models, and you can have something really interesting going on. Then we're really, as Karo likes to say, cooking the fire. With gas. Sorry guys, English is my second language.
Yeah, it seems like there’s going to be a lot of opportunities for transfer learning between networks, which is really exciting.
Jacob Steeves (14:27):
Yes, absolutely. I think this is really the holy grail, and we’re starting to see this appear in the research all throughout machine learning. It’s not good enough to just work on language models. It’s not good enough to just work on visual models. You need to be combining those modalities. That’s, you know, I just was reading an announcement from Microsoft saying that GPT-4 is going to be a multi-modal model. It’s going to be an image model plus a text model. So subnets are working on the individual problems, but it’s really going to be interesting when we’re combining them.
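The combining Const is describing can be pictured with a toy example, assuming CLIP-style alignment where text and image subnetworks emit embeddings in a shared space: a validator scores a text embedding against candidate image embeddings by cosine similarity. The vectors below are made up for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def best_match(text_emb, image_embs):
    """Index of the image embedding most similar to the text embedding."""
    return max(range(len(image_embs)), key=lambda i: cosine(text_emb, image_embs[i]))

# A text embedding versus two candidate image embeddings (toy 3-d vectors).
text = [0.9, 0.1, 0.0]
images = [[0.0, 1.0, 0.2], [0.8, 0.2, 0.1]]
```

Contrastive learning in the CLIP sense trains the two encoders so that matching text/image pairs score high and mismatched pairs score low under exactly this kind of similarity.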
And that's going to happen on the API side. That's going to happen on the validator side, where people who have TAO in the network can query these different modalities and combine the embeddings to do contrastive learning. So you'll be able to learn the relationships between images and text and do things like stable diffusion. That's all based on these fundamental components that we're mining inside the subnetworks. At the same time, we can also make really high-level subnetworks that are just doing customer-facing things, like chat prompts. We can have a network that is answering questions the way ChatTensor does. When we spoke about this last Twitter Space, as we were talking about ChatTensor in the release, a lot of people were upset about the fact that ChatTensor wasn't built into the network.
It was hard for us to hear that, and we didn't want to lie, because it wasn't. But the reason we were in that position was that we needed to wait for Finney to launch before we could put ChatTensor into it. So this is one of the things that we're going to launch immediately: basically a subnetwork that works with more chat-based learning. We're also going to work on a stable diffusion network. This is something I'm really excited about. I'm a visual learner, a visual person. So I really want to see images generated on top of BitTensor in our subnetworks, specifically with things like stable diffusion networks and Midjourney-like types of generation from text to image.
That's what's really cool for me. A lot of these projects, if you look at, say, Stable Diffusion or Midjourney: great, give me one image. BitTensor is going to be able to give us 4,096 images just by holding TAO and querying the network. And I think that's really, really cool. We're going to see a lot of diversity, a lot of different images coming out of these different miners. That's, for me, the really exciting aspect of this kind of pantheon of AI: the plethora of diversity that we'll be able to generate with decentralization, and also the decentralized ownership of these miners.
So yeah, that’s what’s jazzing me up, Mac. And this comes back to the earlier question of why we didn’t go to Polkadot. It was just like holding us up. And we have these great things to build, and we don’t want to wait on anyone. We just want to build them right now. And if we ever integrate ourselves back into Polkadot, it will be on our terms rather than on theirs.
Well, all right, that’s awesome. And like you said, I think that a lot of people are starting to realize that we need that diversity. It’s not enough just to have the one company with one model and the one modality. We need that ensemble. We need that mixture where everyone’s learning and teaching each other.
Ala Shaabana (17:54):
Actually, I'd like to take this opportunity real quick, if that's cool with you, Const and Mac, to talk a little bit more about ChatTensor, because there does seem to be a little bit of a misunderstanding about how it works, how it's built, and what the goal of it is. Let's dig into that a little bit, if that's cool. Absolutely. Awesome. So I think a lot of folks seem to think that we build models on BitTensor from scratch: you have a model that does nothing, and then it gets trained from zero and becomes magically super interesting and super powerful.
In the past, correct me if I'm wrong on the dates, between 2012 and 2017, that's how it was. You could train a machine learning model by just giving it a dataset; it learns on it, and then you've got your model. You're done. But today you can't do that. The tasks have gotten so much more complicated, and simply training a model from scratch is not really enough anymore. It needs a lot more nuance, a lot more power, and a lot more size. So some really smart folks came up with something called pre-training, which, if you think about it, is a lot like learning to read before you read a textbook. It's basically taking a model that already knows something and then training it to be more…
Basically, training it to become really good at whatever task you want it to do. So that's exactly what they did, on a very basic level. What they did with ChatGPT is they took the pre-trained GPT-3, which, let's be honest, only they have access to, the version that was trained by OpenAI, and they fine-tuned it with prompts to get it to respond back and forth like a really intelligent chatbot. Now, this wasn't really optimal because, A, only OpenAI could have created a ChatGPT this way, and B, it took millions of dollars to get there.
So all this money, compute, and research really went into producing the high-quality embeddings that they want, and those are basically the outputs of GPT-3. The higher quality they are, the more powerful ChatGPT becomes. So this is where ChatTensor comes in. It's actually a research project that we're creating specifically for Finney. It's already pre-trained on our own partial datasets and on partial datasets that ChatGPT was trained on, but the main point is not to show off to the world and not to compete with ChatGPT, at least not at this stage. ChatTensor is actually intended to run in Finney and learn from all the subnetworks we're creating under Finney. So once ChatTensor launches on Finney and there are some more models, it should become rather powerful and generic in that sense.
Absolutely, yeah. I think that the main idea, the thing that we are trying to approach is this mixture of experts. And we have to recognize that BitTensor is still in its infancy, and while it does have a lot of capability, we are not quite state-of-the-art yet. But the research does seem to be promising. Would you agree?
Ala Shaabana (20:52):
Yeah, absolutely. It does seem to us that that is where we're going. But think of ChatTensor as the first step in creating the first actual product on top of BitTensor that's going to have a real application, which is really, ultimately, what we're trying to do, right?
So I'd like to touch on registration a little bit. I saw that Const had a new idea about recycling TAO as a means to enter the network. Would you like to speak on that, Const?
Jacob Steeves (21:26):
Yeah, I got a lot of shit for this because I said burn, and people freaked out. We’re not burning any tokens. We’re just recycling. And I think the fear was that we were trying to implement some sort of like pump scheme.
That’s not the intention. The goal with the burn to stake was basically to make the system less permissioned. One of the issues with the proof of work is that although you need GPUs to do machine learning, and that’s sort of isomorphic or related to the mining problem, it turns out that people with large mining firms basically began to gate the entrance into BitTensor for a fee.
And those GPUs were not being used to mine on BitTensor; there was actually this divergence between the GPU power being used to register and the GPU power being used to mine. So the proof of work really wasn't maintaining the relationship between mining and the anti-Sybil resistance mechanism that we had it there for. So basically we made it possible for people to, quote-unquote, recycle TAO to enter the network, so that there could be an equivalence between the price you would pay some mining farm to register your miners and just being able to go do it yourself. So people can get into the system that way as well. It wasn't intended to stop people from doing proof of work. We like that aspect as well. So you don't have to go and buy TAO to access the network.
That’s why we used a proof of work in the first place so that there was no barriers for people that just had compute, and you didn’t have to go get on an exchange or use an OTC. And that really was important in the earliest of BitTensor when there was no market, and it was really hard to get TAO. So it would have been very permissioned in that sense to get into the system.
I think that eventually we can construct other mechanisms people can use to enter the network, where there's some sort of proof of economic cost to register a miner, and we can figure out how to build an equilibrium between those so that they reach some relationship: the proof of work is roughly as expensive as it is to purchase your slot through this other mechanism that I'm talking about hypothetically.
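One way to read the equilibrium Const is hypothesizing: the recycle cost could retarget the way PoW difficulty does, rising when registrations exceed a target rate and falling when they lag, so the two entry routes stay roughly equally expensive. This is a sketch of that idea with made-up parameter names and values, not the mechanism actually shipped in Finney.

```python
def adjust_recycle_cost(current_cost, registrations, target, alpha=0.5, floor=0.1):
    """Nudge the TAO recycle cost toward a target registration rate.

    If more miners registered this interval than targeted, the cost rises;
    if fewer, it falls, mirroring how PoW difficulty retargets.

    current_cost  -- TAO currently recycled per registration
    registrations -- registrations observed in the last interval
    target        -- desired registrations per interval
    alpha         -- damping factor on the adjustment
    floor         -- minimum cost, so registration never becomes free
    """
    ratio = registrations / target
    new_cost = current_cost * (1 + alpha * (ratio - 1))
    return max(new_cost, floor)
```

For example, twice the targeted registrations would push a 1 TAO cost up to 1.5 TAO, while half the target would ease it down to 0.75 TAO, and the PoW route can retarget the same way so neither entry path stays cheaper for long.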
That's what we have, and that's integrated into Finney, and we're going to see how it works. One of the reasons, to bring it full circle back to the purpose of this space, which is to talk about why we're moving away from Polkadot, is that we're actively developing this protocol to make it an efficient market, and Polkadot was potentially going to put limitations on how quickly we could do that.
So if we have a great idea for a new type of mechanism that will help people register, we'd have to run some sort of proposal through Polkadot, and that might have been an issue. It appeared that it was going to be an issue for us going forward. So we chose independence so that we could implement this type of mechanism we're talking about right now, Mac. Do you have any other questions about that, Mac? Are there any holes in my logic or explanation?
No, I think that covers it. I think it’s a pretty exciting new feature. The only other thing was, is there still any plan to implement a registration subnetwork?
Jacob Steeves (25:04):
Yeah, definitely. I think the idea of registration subnetworks is to build an effective filter, where registration into the initial network is only an entry point. You get in there, and there's not that much economic incentive in that network because you're not going to make that much money. It's about climbing through the validation process in that network and getting into its top percentile, and then you can enter the network where the majority of the inflation is accruing.
We want to use it for that reason. But we also want to use it so that we can build networks on top of other networks that are, like, the crème de la crème of a subnetwork. So we could say, hey, what are the top 100 miners? Let's keep that fixed for a longer period of time so that there's not as much churn in the miners. It might be easier for us to do machine learning research on top of those miners because they stay fixed. So we can have, say, a top 100 on Nakamoto where only one new peer is registered per day. That gives us a whole day of research on top of that network before we have to resync the metagraph and reconsider which miners are on it.
That's another aspect of why we built this network connection. But I think the primary one here was to finally put into reality an idea that came from our community a long time ago, when proof-of-work registration was first introduced and the difficulty was like 60 million, and people were complaining at 60 million, if you can imagine that, right? Even then, people were angry that this mechanism was putting them in direct competition with people who had access to large GPU farms, and that maybe those people were not properly solving the machine learning incentive system that BitTensor had produced.
And they were calling for a registration subnetwork back then. We just didn't have, I would say, the development bandwidth to build it, so we pushed it until Finney. So that's why it's coming out now. It's solving a long-term problem in BitTensor. So we're really excited about this and absolutely happy that you brought it up, Mac.
Ala Shaabana (27:26):
I think we should also clarify, Const, just to be extremely specific because of the uproar from last time. When we say that we are doing a burn registration, what's happening to that TAO?
Jacob Steeves (27:38):
It's going back into the incentive mechanism. The way that we distribute tokens and then do halvings is slightly different from Bitcoin. Because our blocks are at different speeds, because we've switched the chain a number of times, because we've slowed it down and brought it back up a number of times, we don't work on a block-based mechanism like Bitcoin does. Bitcoin was 50 bitcoin per block, then 25, then 12.5, and so on. We're working specifically off the issuance number. So it's 10.5 million TAO that's going to be emitted at the current rate before we do the halving. So when we recycle tokens, we don't actually take them away from the 21 million; we just prolong the period of time that we're at the current rate of inflation, if that makes sense. The total issuance is going to go down because those tokens have effectively been removed, but they just get pushed back later in time. They'll come out of the mechanism later on. So we'll get to 10.5 million in six years instead of two or three.
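In code, an issuance-based schedule like the one Const describes looks something like this: emission is a function of how much TAO has been issued so far, not of block height, so recycling (which lowers total issuance) stretches the current rate out in time without touching the 21 million cap. The per-block base emission below is an illustrative assumption, not a quoted chain parameter.

```python
def emission_for(total_issuance, base_emission=1.0, supply_cap=21_000_000):
    """Per-block emission as a function of cumulative issuance, not block height.

    Emission halves each time issuance crosses the midpoint of the remaining
    supply (10.5M, 15.75M, ...). Recycling a registration fee lowers
    total_issuance, so the network stays at the current rate for longer;
    the 21M cap itself never changes.
    """
    emission = base_emission
    threshold = supply_cap / 2          # first halving once 10.5M TAO issued
    remaining = supply_cap / 2
    while total_issuance >= threshold:
        emission /= 2
        remaining /= 2
        threshold += remaining          # next halving at 15.75M, 18.375M, ...
    return emission
```

Because the trigger is the issuance total, every recycled TAO pushes the 10.5M crossing further into the future, which is exactly the "six years instead of two or three" effect.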
Great, and I think that maybe "burn" might not be the best description for this.
Jacob Steeves (29:02):
Yeah, no, we have to be very careful with wording because there’s a lot of other projects in the space that have used that term for various reasons and various mechanisms and I think people are afraid of it. It’s a redistribution, so yeah. A recycling. A recycling, you might say.
Well, yeah, I’m pretty excited about the registration subnetwork. I think it’s important that we keep developing ways for complete beginners to introduce themselves to the network, to play around, to see what’s going on, so that we’re trying to access the true collective nature of humanity. There are all these niches and incredible people that aren’t being recognized simply because they don’t have the means to get through that first step.
Jacob Steeves (29:58):
Yeah, we don't need to have a test chain now, because we can build test subnetworks with 16,000 or 64,000 UIDs where people can play, investigate how the system works, and make small amounts of inflation before moving up the league, so to speak, into the more competitive systems. That's something that we're really excited about, and something we were hindered on with the Polkadot system, primarily because they were going to limit the time we had on blocks. So 64,000 UIDs was actually out of reach for us until we had basically re-architected our chain. Now having our solo chain, our independent chain, allows us to move up to 64,000, potentially even higher. I don't even know that we need 64,000 or more, but maybe we do, and we can investigate what that looks like and play around with that sort of horizontal scale.
Ala Shaabana (31:08):
Gotta love heap space problems.
What was that, Ala?
Ala Shaabana (31:14):
I was going to say, I love heap space problems. That was basically our biggest issue before. We ran out of heap space and I’m sure everybody recalls that fun time when the blockchain completely stopped producing blocks. Yeah.
Hey! Well, for the people who are new to the call, if you missed the announcement earlier, the Finney upgrade is planned for March 16th at 5 p.m. Eastern, if I’m not mistaken. For those of you who are mining, you’re aware of how fragile a miner can become if it goes offline. What is the transition from Nakamoto miners to Finney miners going to look like? What will miners have to do on their side? Is there going to be any sort of immunity period or registration difficulty change?
Jacob Steeves (32:06):
That’s a great question. We’re going to increase the immunity period, yes. That’ll help people. Secondly, we’re going to give people a fair amount of time. It’s going to happen on one day. We’ll announce more about the specific times coming up in the next week, but basically on a Thursday in the morning, we’re going to pull down Nakamoto. We’re going to tell you beforehand when that’s going to happen. It just means that we’re going to stop block production. You can still access the state of Nakamoto. We’re just going to turn off the validators.
Then we're going to port. We're basically going to deep-copy the state from there to the new network in that period of time and make sure everything's working. Then eventually we'll open up the firewall and begin emission, and also registrations on those networks, within that day. It's going to be clutch for us. This is why we're taking the time as a team to make sure that we do this properly, because there can't be any flaws and there can't be any mistakes in this process. We're pretty confident about it now. We've done it a number of times.
Ala Shaabana (33:09):
Sorry, Const, can I add a little bit of what you’re saying?
Jacob Steeves (33:09):
Yeah, please.
Ala Shaabana (33:12):
To give people insight into how it's going to work, it's three steps, effectively. First is a snapshot. We're taking a snapshot of Nakamoto: everyone's balances, everyone's registrations, everyone's UIDs, yada yada. Then we stop Nakamoto and start up Finney, and that import happens right on boot. As soon as we're confident it's producing blocks and we've done our integrity testing, then we can move forward from there. Basically, part of the specifics that we're going to announce as well is which block, or what time, we're going to take the snapshot at.
After that point, it really won't matter in terms of mining and stuff like that what you're doing, because we'll likely be turning off the validators a few blocks after that. We'll specify exactly which block number or which hash we'll be picking, and we'll let you know. We'll be taking the snapshot likely Thursday in the morning, then we'll do the port, and we'll announce exactly what time it will be launching. I think, Mac, you specified it's at 5 p.m. To everyone on the call: I wouldn't take that as the final time. The exact time will be announced by us in the next few days. Well, I think Mac got locked up. Let me see if he has a request to speak. There are some comments here before we go.
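The snapshot, halt, and import-on-boot steps described above can be sketched over a toy state dict; the field names and the integrity check here are illustrative, not the actual Substrate storage layout.

```python
def take_snapshot(state, block):
    """Step 1: capture balances, registrations, and UIDs at the announced block."""
    return {
        "block": block,
        "balances": dict(state["balances"]),
        "registrations": list(state["registrations"]),
    }

def halt(state):
    """Step 2: stop block production so nothing changes after the snapshot."""
    state["producing"] = False

def boot_finney(snapshot):
    """Step 3: import the snapshot on boot, verify integrity, then start producing."""
    finney = {"genesis": snapshot, "producing": False}
    # Integrity check before opening emission and registration:
    # every ported balance must be present and non-negative.
    assert all(v >= 0 for v in finney["genesis"]["balances"].values())
    finney["producing"] = True
    return finney

# Walk through the three steps on a toy Nakamoto state.
nakamoto = {
    "balances": {"alice": 10.0, "bob": 5.0},
    "registrations": ["uid-0", "uid-1"],
    "producing": True,
}
snap = take_snapshot(nakamoto, block=123_456)
halt(nakamoto)
finney = boot_finney(snap)
```

The ordering is the point: halting only a few blocks after the snapshot block means nothing a miner does past that block can affect the state that boots Finney.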
Jacob Steeves (34:32):
There are some comments and questions here. One of them is about whether we’re reducing the fees, and the answer is yes. Hoorah. A lot. To be fair, Dick, I made a mistake.
Ala Shaabana (34:49):
And the fees are now $10 billion per transaction, just so you know. Oh, that’s some market inefficiency.
Jacob Steeves (34:56):
Mac, why can’t we hear you? I’m back now. I just got disconnected.
Okay. Oh, well, $10 billion. That’s no good. We can’t have that. Well, you’re going to have to mine some more, guys. We’re going to have to do that.
Ala Shaabana (35:13):
Well, Dick, you’re going to have to mine some more, guys. Oh, man.
All this hard work.
Jacob Steeves (35:20):
Mac, so what else do we have in store for tonight? And do we have any more questions?
Well, that pretty much wraps us up. I think we have a few minutes to take questions from the audience. If anybody has a question, please feel free to drop it in the chat.
Ala Shaabana (35:34):
Before we do that, there are also a few questions on the slide that we posted, so we can go through a couple of those. What we’re going to do is go through the most community-voted questions. I think that’s the fairest, because there are a lot of them, and it would take a long time to go through them all. So I’ll start with the first one. Somebody’s asking, how will you avoid validator centralization? I think they’re talking with respect to delegation.
Jacob Steeves (35:59):
How do we avoid validator centralization, is the question. Yeah, I think they’re asking because there are going to be fewer validators.
Ala Shaabana (36:03):
Their concerns are going to be centralization there. Well, I think the question is, what do you mean by centralization here?
Jacob Steeves (36:09):
Some people have said you need a thousand unique parties to be considered decentralized, and I think there are definitely fewer than that for BitTensor, because we’re going to have a hundred slots. That’s from one perspective. But from another perspective, all of these validators are vying for delegated stake.
And they have constituents in that sense, thousands of potential constituents. So by opening the delegated stake direction with Finney, I think that we’ve increased decentralization, because now these validators have many more people that they are responsible for: many more delegators, people that are giving them TAO, that they’re going to have to answer to. Whereas right now, it’s not like that. So I really think that with Finney, even though we’ve likely reduced the absolute number of validators in the system, and there’s a computational reason for that, we’ve definitely opened up an aspect of decentralization with delegated staking.
We’re going to have to watch this play out. Building blockchains, we can really just guess to a certain degree. We can do analysis, but they’re like living creatures. They have their own lives. And it’s something that we’re going to have to tweak. And if we see that kind of centralization, we can attack that centralization. And eventually, there’ll be a lot more control for people in this call, probably, that are TAO holders, to make direct action to the protocol if that’s what’s needed to alleviate any centralization at the validator level.
Because at the end of the day, it’s about the decentralization of TAO that matters. Everything is tied to or tethered to who owns the tokens. So as long as that’s decentralized, I think we’re sitting pretty going forward. And we’ve taken a lot of action to make sure that’s the case. And one of the biggest ones we did was not do a pre-mine and ensure that there’s many holders here, many people that have a lot of play in the system, a lot of power.
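One common yardstick for the kind of centralization being discussed here is the Nakamoto coefficient: the minimum number of parties whose combined stake is a strict majority. A toy sketch with invented stake figures, not real network data:

```python
def nakamoto_coefficient(stakes: list[float]) -> int:
    """Minimum number of holders controlling a strict majority of stake."""
    total = sum(stakes)
    acc, count = 0.0, 0
    for s in sorted(stakes, reverse=True):
        acc += s
        count += 1
        if acc > total / 2:
            return count
    return count

# 100 validator slots, two hypothetical stake distributions:
concentrated = [900.0] + [1.0] * 99          # one whale holds nearly everything
delegated = [10.0] * 100                     # stake spread evenly by delegation
print(nakamoto_coefficient(concentrated))    # 1
print(nakamoto_coefficient(delegated))       # 51
```

By this measure, spreading stake across many delegators raises the number of parties that would have to collude, which is the intuition behind the delegated-staking argument above.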
Ala Shaabana (38:42):
Standard’s asking, will I have to stake my miners as I move over to Finney to build trust to prevent losing them, or will there be some sort of immunity period? I think we answered this already. If you’re already registered by the block that we take the snapshot at, you will be registered on Finney with a longer immunity period, I believe, if I’m not mistaken, and that will give you a little bit of time.
Jacob Steeves (39:03):
Yeah, and if you’re a validator, you just update your code and run it, and it begins setting weights; it really should be that simple. I mean, that’s what we’re testing. That’s why it didn’t happen yesterday, and that’s why it’s happening one week from now: because we’re testing to make sure these things are really smooth for the people in the community. Also, please do test on our test network. We have branches that are available for people to give a go. So to a certain degree, if there are any surprises, you can make it so there aren’t any, by following those branches and testing on the test network.
Absolutely. I second that. You’ve got seven days to test, y’all.
Ala Shaabana (39:51):
Go nuts, guys. Seriously, hit it as hard as you can, basically, because the more bugs you find, the better Finney is going to be at launch, because we need to make sure we catch everything we may have missed. But let’s see how it goes. I’m really excited. The instructions are in the announcements. We’ll also be updating the docs to make sure that y’all can see all the instructions, but basically, with release 3.7, you would just point at Finney and you’re good to go.
Many people have their TAO stored in the Polkadot extension wallet. Since you will not make use of Polkadot for Finney, should we remove our coins from there? If it’s working on Nakamoto and it’s working for you, I don’t see a reason why you should move anything, because it should be compatible, if I’m not mistaken.
Yeah, just because we aren’t merging with Polkadot at this time does not mean that your wallet needs to change. It’s all going to stay the same.
Ala Shaabana (40:45):
Yeah. Basically, if something’s working for you on Nakamoto from an infrastructure perspective, in terms of wallets and stuff like that, it should work just fine on Finney. There shouldn’t be any changes that way. When will the Chattensor beta be ready for public testing? The invites to the beta are coming out slowly; they’re trickling out. And then we’ll be releasing a more public beta in the coming weeks.
Hi. Is the new website going to launch on the same day as Finney? No. Nope. Mr. Seeker asks, will this be a hard fork or a soft fork? Will we need to do anything on our end? This is definitely a hard fork. Sorry, did you want to take this one, Const?
Jacob Steeves (41:36):
I mean, it’s a one-word answer.
Ala Shaabana (41:37):
Yeah, it’s literally a hard fork. And as I explained earlier, the main reason for that is that we need to make so many changes in Finney, changes so intrinsic to the blockchain itself, that a simple chain upgrade or a soft fork just will not work. It has to be a hard fork. On your end, if you’re currently pointing at Nakamoto, you’ll really just need to point at Finney at that point. We’ll be releasing Docker images for the new subtensor; we’ll be releasing all of that. So it shouldn’t be that much of a change for you, but we’ll be releasing very specific instructions on how to, for example, move your Docker image over if you want to use a Docker image or your own local subtensor. Or if you’re pointing at Finney directly, just point at Finney directly.
Oh, Ala, how big is our new Docker image?
Ala Shaabana (42:24):
So as a parachain, it would have been significantly smaller. As a full node, which is what we’re doing now, it’s actually a little bit bigger. I can’t remember exactly, to be honest with you, but around 20 gigabytes, which is much smaller than the 100 gigabytes we’re at now. One of the exciting things about Finney, with this solo chain that we’re doing, is warp syncing, which is basically going to be a way for you to use a snapshot of the blockchain to warp-sync to where the current chain is. So you don’t really need to worry about Docker containers anymore, or all that kind of stuff. That’s one of the things that Finney is going to enable us to build, and it’s likely coming shortly after.
That’s wonderful to hear. I’ve spent many an hour syncing my Docker.
Ala Shaabana (43:10):
Yeah, I’ve come to be very tired of Docker because of this specifically. On the UID registration buying mechanism: how do you prevent people from gaming or attacking the system and buying way too many registrations inside one epoch? Const, I think you’re best placed to answer this question.
Jacob Steeves (43:31):
Well, it’s the same mechanism as with the proof of work, right? You could have said the same thing about the POW: why don’t people just solve 10,000 proofs of work within a block? And the answer is that it’s going to be adaptive and competitive. So the cost to get into the network increases if the number of people paying to enter is higher than our peg.
That’s the first answer. The second answer is that we just have a limit: you can only register one miner per block. So that makes it so there are only 100 registrations per 100 blocks, and that’s enough time for the cost to increase. So if people do this, the cost will probably grow much higher than the POW cost. And I guess that’s okay, right? From all perspectives, this pushes the halving down the road.
It decreases the current supply. So I think it’s good for everybody, at least in the short term. And basically, there’s just limits on the chain to stop people from, say, wiping the network in a short period of time.
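The adaptive pricing described here, where the registration cost rises when registrations per interval exceed a target peg and falls when they lag, can be sketched like a difficulty retarget. The update rule and constants below are illustrative, not the chain’s actual parameters:

```python
def adjust_burn_cost(cost: float, registrations: int, target: int,
                     alpha: float = 0.5, floor: float = 1.0) -> float:
    """One difficulty-style adjustment step for the registration cost.

    If more miners registered this interval than the target peg, the cost
    rises proportionally; if fewer, it falls, never dropping below a floor.
    The rule mirrors PoW difficulty retargeting, not the real chain params.
    """
    ratio = registrations / target
    new_cost = cost * (1 + alpha * (ratio - 1))
    return max(new_cost, floor)

cost = 10.0  # TAO
# A burst of demand: 200 registrations against a peg of 100 per interval.
cost = adjust_burn_cost(cost, registrations=200, target=100)
print(cost)  # rises to 15.0
# Demand dries up: 50 registrations the next interval.
cost = adjust_burn_cost(cost, registrations=50, target=100)
print(cost)  # falls back toward the floor: 11.25
```

Together with the hard cap of one registration per block, a rule of this shape is what makes bulk-buying registrations self-defeating: each burst pushes the price up before the next slot opens.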
Ala Shaabana (44:45):
Will you disable the registrations from the first day or two when Finney goes live, both POW and the new mechanism?
Jacob Steeves (44:52):
Not two days, but I think one day.
Ala Shaabana (44:58):
Maybe 24 hours, just to give everybody in every time zone a chance. Is there anything on Twitter that we need to answer? Or if somebody has a question specifically, do we have a mechanism for people to raise their hands and unmute themselves here, Mac?
I’m not sure. I think people can request to speak. I’m going through the chat real quick to see if you have any questions.
Ala Shaabana (45:24):
Okay, sounds good. I’m just looking through Twitter to see if there’s something. Let’s see.
Yes, delegation will be immediately available with Finney. Yes. Will Chattensor be on Finney at launch next week? I believe Chattensor is already on the network, if I’m not mistaken.
Ala Shaabana (45:51):
Will the polkadot.js extension still work after Finney? Yes, it should. At least I’ve been playing with it, so it should work. Will the network fees be reduced? Yes, they will be much, much less. We’re aiming for the equivalent of a dollar or two, as opposed to right now. I think it’s rather expensive.
Audience Member #2 (46:15):
Hi, everyone. It might be a stupid question, but I just wanted to ask it anyway. Can you hear me?
There are no stupid questions. I can hear you. Please go ahead.
Audience Member #2 (46:28):
Yeah, I can hear you too. With the staking mechanism being introduced, there are no plans on changing the max supply of TAO, correct? It’s going to stay the same.
Ala Shaabana (46:38):
No, the max supply of TAO remains the same. The total supply is going to be 21 million. That doesn’t change.
Audience Member #2 (46:43):
21 million or 200 million? I couldn’t hear. 21 million. Yeah, yeah. Okay, perfect.
Ala Shaabana (46:50):
Yeah, I was scared there for a second.
Audience Member #1 (46:56):
Hey, surprise! Okay, I can ask the real stupid question. Go ahead. Not at all. I found you guys through the meaning of TAO, all right? I need an explanation like I’m five. How do I get started?
I think I can take this one. Yeah, go for it. So if you want to get started as a server and you want to serve your machine learning model, there are a couple of ways you can approach it. You can dive in, try to use all the public tools available to you, and use the CLM model-tuning script in our documentation to fine-tune an open-source model.
Or you can jump straight into the academics and start learning machine learning from scratch. There are a lot of resources you can find online. The Coursera course with Andrew Ng is really great; I’ve personally taken it. There are usually classes at local universities you can take. That’s what you need to get started as a server, and you will also need compute, so GPUs.
Audience Member #1 (48:04):
Okay, so I have the server, I have GPUs, so I should go through your website to get started?
Ala Shaabana (48:16):
A lot of documentation written by our very own Mac.
Audience Member #1 (48:21):
I got you.
Audience Member #2 (48:23):
Is there any way I can use, and I know the answer is no, I’m just asking anyway, Bitmain ASICs? Or is it not the same, and it has to be GPUs?
Ala Shaabana (48:34):
Can you basically use ASIC miners? So I’m not a hardware guy, but my understanding is that ASIC miners work very differently from GPUs. GPUs are specifically built for matrix multiplications, and that’s what makes them so good at machine learning. I’m not sure you’d get the same output from an ASIC miner as you would with a good GPU. I think ASIC miners are very expensive right now, aren’t they?
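To illustrate the point about matrix multiplications: nearly every layer of a neural network reduces to a matmul, which is the operation GPU hardware parallelizes. Here is a toy CPU sketch in NumPy of one dense layer’s forward pass; a GPU runs exactly this computation, just massively in parallel:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 512))     # a batch of 32 inputs
W = rng.standard_normal((512, 256))    # layer weights
b = np.zeros(256)                      # layer bias

# One dense layer: a matrix multiplication plus a ReLU activation.
hidden = np.maximum(x @ W + b, 0.0)
print(hidden.shape)                    # (32, 256)
```

SHA-256 ASICs, by contrast, are fixed-function hash pipelines with no path to running this kind of arithmetic, which is the mismatch being described.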
Audience Member #2 (48:59):
No, I just have a ton that I have nothing to do with.
I can answer this question. It is possible to do machine learning on ASICs. It’s possible, but nobody has built the functionality to do it easily and at scale. You would need an entire team of hardware engineers to solve this problem.
Ala Shaabana (49:25):
The other thing is we are working with NVIDIA and CUDA for GPU optimizations. I’m not sure how ASIC miners work with that setup. We’re specifically working with the Torch library, and I’m not sure Torch is built for ASIC miners. What’s the question? Can they use ASIC miners to mine on BitTensor?
Jacob Steeves (49:47):
What do they mean by that? Like, can they make their own neuromorphic chip kind of thing?
Ala Shaabana (49:53):
Unfortunately, I can’t hear the audience. No, I don’t know why you can’t. Basically, they were asking if they had a boatload of ASIC Antminers, the Bitcoin miners.
Jacob Steeves (50:02):
Antminers are for SHA-256 hashing, but we use SHA-2 and SHA-3 with multiple wraps. Maybe you can code that up with an ASIC miner, I’m not sure. You can look at what we do. But if we found out that somebody was using Antminers, we would just change our POW. So, do it secretly.
I have a quick question, if you don’t mind. Yeah, please.
Jacob Steeves (50:36):
You could do that, I think. I actually think you could use an Antminer that was…
Well, for registration, but machine learning is going to be a whole other…
Jacob Steeves (50:45):
Oh, yeah, yeah, yeah, useless. As soon as you get on the network, it’s useless, completely useless. Waste of money. Sorry, Akito, go ahead.
No worries. I just have a question. So, basically, when Chattensor is released, how is it going to improve over time? Is it similar to ChatGPT, where the more people that use it, the more data? And then, is there going to be revenue from people using it? Like, if people are paying a subscription, how would that work out?
Ala Shaabana (51:15):
So, Const might be better to speak on that one. I’m not sure if you heard him, Const, but basically…
Jacob Steeves (51:20):
I didn’t. I didn’t hear him.
Ala Shaabana (51:21):
So, he was asking, basically, if Chattensor would work in a reinforcement learning kind of way, where it would learn from people talking to it, if it’s going to be on Finney, and if there’s going to be some kind of subscription model. On the subscription model, I can tell you there’s no traditional one, so you don’t have to put in your credit card or anything like that. But similar to anything we do on BitTensor, what gates access to the network is TAO. So, to be able to run your own Chattensor on Finney, you would need some TAO to actually get in, right? Through registration, or through the network itself.
But as for the reinforcement learning bit, I think that’s more Const’s territory. Maybe you can speak more to that.
Jacob Steeves (52:04):
Well, yeah, a couple of things. I mean, the current implementation of Chattensor is email-based and beta-based, because we were really experimenting here, and it was a great success. But we are just pulling people in slowly as we build the front end and make the system a lot more polished.
We will be building an application like Chattensor that’s much more polished, and also way more integrated inside of the BitTensor ecosystem, with TAO-based access. Now, on the economic model: right now Chattensor doesn’t work very well with the embeddings network. That’s a bit of a longer experiment.
We’re going to be producing a sub-network that works with prompts, and at that point, you will be able to access the network in the standard way, through the main protocol: if you hold TAO, you can access the system. And that’s just innate, right? That just works through BitTensor. It doesn’t matter if you use our front end or not. It’s just there. And it’s censorship-resistant, and you don’t have to ask us, if that’s what you’re asking.
Okay, so the censorship thing that you… That’s like V2.
Jacob Steeves (53:22):
V1 is the beta. V2 is an updated front end with more people, and it’s much more open. And V3 is a sub-network where we’ll see a lot more bandwidth to Chattensor, and it’s also censorship-resistant. So don’t come to us to see if you can get beta. It doesn’t fucking matter who we are. It matters if you own TAO, if you are in the network.
Speaking of the censorship…
I think Akito has another question about censorship.
Yeah, it was just because I’ve seen basically the main concern right now with OpenAI.
Jacob Steeves (54:03):
I can’t hear them, unfortunately.
I’ll repeat the question to you. Go ahead, Akito.
It’s basically just about the censorship and stuff, because a big issue with OpenAI right now is that it’s very one-sided. As you can see, kind of liberal, I guess you could say, so it limits a lot of stuff. I mean, some of the things it limits are dangerous stuff, like how to make bombs or whatever, which I can understand. But you can see it’s very one-sided, at least politically. So is Chattensor going to be raw, zero-censorship, ask it anything?
Right, okay. So, Const, what he’s asking is, is Chattensor going to censor its responses, and can UIDs on the network be censored?
Jacob Steeves (54:53):
Right, that’s a really good question. Because Chattensor, the application front end that’s really nice to use, is going to be run as a front end owned by a person, and likely a company, who’s using their TAO to access the network. It’s their prerogative to censor.
And, you know, I might say that there’ll be some good censorship here, right? It’s likely that some of the miners could be responding with some really atrocious stuff, so we’ll want a little bit. But it’s the prerogative of the validator, the validator which has the access. If you have access to a validator, or if you’re staked to a validator, you’ll likely get the access that they provide, because they’re basically mediating you to the network.
If you’re a validator, you can just query the network directly, right, and everyone will respond. Sometimes people will respond anyways, because it’s very possible for you to host a model on BitTensor, and you can just accept queries from people without them being a validator, right? So there could be no censorship at all from that aspect.
For the miner side, censoring the miners, this comes down to really the core of what BitTensor is, because BitTensor is a mechanism for the validators to reach consensus on the outputs of the miners and to agree on which ones they prefer. So if there is consensus amongst the validators as to what should and should not be said on BitTensor, then that model will lose its incentive. It doesn’t mean that the axon has to go offline, and it doesn’t mean that the model can’t be served somewhere; it just means that it won’t be incentivized through BitTensor, and that’s the nature of the consensus mechanism we built. Thankfully, that’s fairly decentralized, right, because there are a number of players in there; it’s not just a single person that gets to decide what gets value and what doesn’t.
And that’s good. That’s the nature of decentralized consensus, this decentralization of power, and therefore it’s harder to just censor ideas or speech that you disagree with, because it affects you; you have to reach consensus amongst the other people that are also stakeholders in the system. So I can see a form of censorship arising at the level of the applications and the validator heads, but only through consensus would it occur actually on the network. So the network will likely remain quite violent, if that word makes sense. I think we’re going to hope that we get a fair amount of censorship when it comes to images as well, and it will be our front ends that do that type of censorship. At the end of the day, yeah.
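The property described here, where a miner only loses incentive if validators holding a majority of stake agree, can be sketched as a stake-weighted median over validator scores. This is a simplified illustration of the idea, not the chain’s actual consensus implementation:

```python
def consensus_scores(validator_weights, stakes):
    """Stake-weighted median of validator scores per miner.

    validator_weights: list of dicts, one per validator, miner_uid -> score.
    stakes: list of stake amounts, parallel to validator_weights.
    A miner only loses incentive if validators holding a majority of
    stake score it low; no single validator can censor alone.
    """
    miners = {uid for w in validator_weights for uid in w}
    total = sum(stakes)
    result = {}
    for uid in miners:
        # Sort (score, stake) pairs, then walk up to the stake-weighted median.
        pairs = sorted((w.get(uid, 0.0), s) for w, s in zip(validator_weights, stakes))
        acc = 0.0
        for score, stake in pairs:
            acc += stake
            if acc >= total / 2:
                result[uid] = score
                break
    return result

weights = [
    {"miner_a": 0.9, "miner_b": 0.0},   # one validator tries to zero out miner_b
    {"miner_a": 0.8, "miner_b": 0.7},
    {"miner_a": 0.9, "miner_b": 0.6},
]
stakes = [100.0, 100.0, 100.0]
print(consensus_scores(weights, stakes))
```

In the example, the lone validator zeroing out miner_b does not move the stake-weighted median, which is the censorship-resistance property being described: only a stake-majority agreement changes a miner’s reward.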
Ala Shaabana (58:00):
I was going to say, maybe we should stay away from the word like violent, maybe let’s call it unfriendly.
Jacob Steeves (58:05):
Yeah, I mean, the network is sort of not a safe space, if you think about it, because of the censorship; but the safe space is the applications that the validators build on top, right? So people can choose the level of censorship they want by picking the validator heads that speak their language and are part of their culture, whatever that comes down to. But the protocol itself doesn’t censor anything. The miners can use the wire protocol all they want; it’s only the consensus that would actually push something out.
Audience Member #1 (58:40):
That’s awesome. I really like that. Thank you guys for that. And just to end one last thing, because I don’t think you guys mentioned, I was just wondering, would you guys consider or have you considered using L2 or Ethereum?
Jacob Steeves (58:51):
Yeah, I think this is, you know, really important right now, too.
Ala Shaabana (58:55):
Akito’s asking a question, Const.
Ala Shaabana (58:58):
He’s basically asking if we considered using L2 or Ethereum. Sorry, continue, Akito, what were you saying?
Yeah, so just L2 or Ethereum, because it seems like it’s quite a bit more advanced and it’s got a lot of native users. So I’m just wondering why you guys didn’t go that route and chose Polkadot.
Ala Shaabana (59:14):
Yeah, basically wondering why we went with Polkadot instead of, you know, some other L2 chain or Ethereum.
Jacob Steeves (59:19):
Well, the thing that really, really is good about Polkadot so far is Substrate. Their Substrate chains are fucking amazing. Excuse my language. They’re amazing. They’re really, really well-designed software that makes it really easy to build an application on your own chain. So we picked Substrate initially, and, you know, we’re currently negotiating with Polkadot, basically. That’s the purpose of this call.
Ala Shaabana (59:51):
Sorry, to add to what you were saying, Const. One of the really cool things about the Polkadot chain as well, at least in my experience, because I’ve been on it, honestly, heads down here for the last few months, is the fact that it’s written in Rust. You know, I have kind of a love-hate relationship with Rust. Some days I hate it, some days I just cannot get enough of it.
But the cool part about it is that, as hard as it is to use, it’s actually very well optimized for a lot of the compute work that we’re doing. It just required a lot of fiddling on our side to figure out how that works, especially when we’re comparing chains and working with a lot of time-critical stuff. Effectively, we’re processing a whole boatload of responses from a whole boatload of models within 12 seconds. So, in that sense, Rust actually makes a lot of sense to use.
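The 12-second constraint described here is essentially a deadline-bounded fan-out: query many models at once and keep whatever responds inside the block time. A rough sketch of that pattern (the timings and model stubs are invented for illustration, and the block time is scaled down so the demo runs quickly):

```python
import asyncio

BLOCK_TIME = 0.12  # stand-in for the ~12 s block time, scaled down for the demo

async def query_model(uid: int, delay: float) -> tuple[int, str]:
    """Stub for one miner: responds after `delay` seconds."""
    await asyncio.sleep(delay)
    return uid, f"response from miner {uid}"

async def gather_within_deadline(delays: dict[int, float]) -> dict[int, str]:
    """Fan out to every miner, keep only responses that beat the deadline."""
    tasks = [asyncio.create_task(query_model(uid, d)) for uid, d in delays.items()]
    done, pending = await asyncio.wait(tasks, timeout=BLOCK_TIME)
    for t in pending:        # late miners are simply dropped this block
        t.cancel()
    return dict(t.result() for t in done)

# Miners 0 and 1 respond in time; miner 2 misses the block deadline.
responses = asyncio.run(gather_within_deadline({0: 0.01, 1: 0.05, 2: 0.5}))
print(sorted(responses))     # [0, 1]
```

The same shape applies whether the fan-out is written in Rust or Python; the point is that everything slower than the block deadline is dropped rather than awaited.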
Thank you guys so much. Thank you, Akito.
Ala Shaabana (01:00:43):
Cool. I think we’re at the six o’clock mark. What do you think, Mac? Should we wrap it?
I think we can wrap it up. If anyone has any more questions that were not answered, feel free to at me in the Discord and I will answer to the best of my ability.
All right, guys. Thanks for coming. Have a great Thursday.
Jacob Steeves (01:01:08):
Thanks, everyone. Have a good night.
Bittensor is an open-source protocol that powers a scalable, globally-distributed, decentralized neural network. The system is designed to incentivize the production of artificial intelligence by training models within a distributed infrastructure and rewarding insight gained through data with a custom digital currency.