Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present.

Ala Shaabana (00:00):

Shall we get started, Const?

Jacob Steeves (00:02):

Hit it.

Ala Shaabana (00:04):

Awesome. First question is: when do you plan to increase the number of network UIDs, and by how many? If you do this too early, it could spiral down; people will give up when they see themselves receiving lower rewards.

Jacob Steeves (00:15):

So what we’re going to do is keep the main network at 4,096 so that we can get to a steady equilibrium right off the bat. However, our plan is to up that to 8,000 or 16,000 pretty quickly. What we’ll probably do is trial 8,000 on a subnetwork first and then switch that over to the main network.


We can hit 16,000 on this network. We’re pretty confident that we can. The question is just whether or not we’ll be able to sustain that from a consensus perspective, not from a computation one. So that’s really cool, and it will allow a lot more people to get into the network. What also will be possible is for us with subnetworks to work around things like having a registration subnetwork where there’s 16,000 UIDs in the registration network, and then we pull from that network into the main network.


But those plans are not fully formalized. What I’ll say for now is that just expect it to be 4,096 at the launch. We won’t be going less than that, and we’ll tell you guys in the new year when we’re going to go to 16,000 or higher than 4,000.


Yeah, regarding the Twitter account: generally, the design for the Twitter account last year was notifications-based, so let’s get information out to the community. But I think there’s been a push for more interactive content and more commentary on the AI industry in general, as it’s been going on for a long time. I think that’s a really great idea, so we’ll definitely look into that.

Ala Shaabana (02:19):

When’s the next increase in the minimum TAO for validators? 2048 next?

Jacob Steeves (02:24):

This is about the increase in the minimum amount of stake required to be considered a validator. We spoke about this in the last TJFT: how we’re moving away from using a lower stake limit and toward a limit on the number of validators.


And the reason we want to do this is that if we can limit the number of validators, we can decrease the computational size of the epoch step function, which lets us move up to something like 16,000 UIDs. So instead we say, look, there are only going to be 300 slots for validators. Saying it’s 2048 or higher leads to an issue where we have to continually adapt this number to ensure there aren’t too many validators in the network gaining immunity because of their stake, which has led to a number of issues with the registration process in the last network. So we’re moving away entirely from the concept of a min-stake requirement, and it’s going to be based on the dividends of the users. What that means is that it’s likely going to be a competitive value rather than an absolute value, and you’ll have to look at how it changes over time. So there won’t be a fixed number we can just sit on, like 2048, for instance.
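A rough sketch of the selection rule described above, in Python. All names here are illustrative, not the actual Subtensor code: the point is that with a fixed slot count, keeping a validator slot is a competitive ranking by dividends rather than crossing an absolute stake threshold.

```python
# Illustrative sketch (not the real chain logic): choosing validator
# slots by competitive ranking instead of a fixed minimum-stake number.

def select_validators(dividends: dict[str, float], slots: int = 300) -> set[str]:
    """Return the hotkeys holding the top-`slots` dividend scores.

    With a fixed cap, the epoch step only ever iterates over `slots`
    validators, so its cost no longer grows with total network size.
    """
    ranked = sorted(dividends, key=dividends.get, reverse=True)
    return set(ranked[:slots])

# The cutoff is competitive: whether you keep a slot depends on how
# everyone else is doing, not on crossing an absolute stake value.
validators = select_validators({"a": 0.5, "b": 0.3, "c": 0.1}, slots=2)
```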

Ala Shaabana (03:47):

So another question is, what methods do we have in place to ensure BitTensor stays out of the hands of the tech giants once it hits the DAO?

Jacob Steeves (03:55):

Yeah, I read this question, and it was very thought-provoking. It’s very interesting, but it’s pretty vague: what does it mean to stay out of the hands of the tech giants?


In the same way that, say, the mining network in Bitcoin or any PoW system is really decentralized in its compute ownership, over those first years of Bitcoin’s development a large tech giant could have come in and totally controlled Bitcoin. Now, potentially, even nation states can’t do that. So I think it really comes down to this: the protection we have against centralization is the growth of the network itself. How can we increase the value of the network? How can we increase the computation behind it? Those are very related. And the larger we are, the more difficult it is for a bigger fish to simply eat us up.

When it comes to the DAO itself, we don’t have any surefire plans for how the DAO is going to be organized. We alluded to something earlier this year about how we’re going to do voting. And we intend to have something that attempts to reduce pure economic power in the DAO, so people can’t just vote with economic power, and power is split across multiple individuals. That’s a difficult problem to solve, and we haven’t solved it yet. But we are definitely looking at ways to stop a large player from simply buying the network, and we won’t release anything that is attackable from that naive direction.

Any chance we can open source the code for the BitTensor playground?

Ala Shaabana (05:37):

I can actually answer this one. We will; however, it is incomplete. You can literally see in the URL that it says beta. And until we get to a point where it is actually code complete, we don’t want to just open source something that is incomplete. That’s almost a software engineering pledge that most engineers make, because if we release incomplete code, we’re going to end up with what is basically a gigantic repository of garbage code. So let us finish it and we will open source this, 100%, but it does need to be a little bit more finished and polished.


Can you tell us about some interesting characters and institutions you met at NeurIPS, and how do they view BitTensor? I think that one’s on your side, Jacob. The most interesting connection, I think, was with the LAION people. Maybe some of you are familiar: LAION is the team out of Germany that makes the large open-source, community-run image dataset.

Jacob Steeves (06:42):

So we use the Pile for text, but LAION is focused on images, and they’ve produced this incredible dataset that has been the dataset underpinning OpenCLIP, for instance, which is better than even OpenAI’s CLIP. So I met the, you might say, founder of that project, although I don’t think he would describe himself as a founder. He’s just a schoolteacher from Germany who’s dedicated his life to open-sourcing machine intelligence, and he was very interested in what we were doing, because I think there was a lot of alignment.


We talked at length about how incentives play with ownership in the system, and there’s definitely a longer conversation here about his perspective and philosophy on how AI should be governed, run, and expanded. He is anti-incentives: he chose directly not to work with the companies that wanted to buy LAION or even hire him.


So he had taken a very, I would say, fanatical approach to incentives in that sense. And we are philosophically misaligned, because we work with incentives naturally and directly. So that was a very interesting conversation, I might say.

Ala Shaabana (08:16):

Any plans for a DEX after getting a slot? I can actually take on this one.


So for the parachain bidding, there are a lot of tutorials online that you can read up on about how parachain slots work. But really it’s basically a weekly thing: once a week they will have an auction for a slot. The slot usually begins between a few weeks and a month or so after the auction ends. Say BitTensor, or whatever other blockchain, wants the slot. What they’ll do is literally bid by staking some specific number of DOT for the slot. So let’s say that we are blockchain X and we want to get this slot for this week. If we win the slot for the week, we actually own that slot for two years. So it’s kind of a rental: you lease it for two years, and for those two years it’s yours. And you’re naturally part of the Polkadot ecosystem when you win it, and naturally part of the DEX. You don’t go after a DEX after that; you’re already part of the Polkadot DEX itself.

And on our side, I think there are going to be a couple more questions about the self-funding, so I just wanted to clarify that. On our side, we didn’t do a crowdloan, namely because of the volume of TAO. There’s not much TAO to give away; it’s very scarce, it’s very difficult to get, as everyone here knows. And so because of that, we didn’t incentivize it with a crowdloan. What we did was speak to our investors and look internally, and we found that due to the bear market, this is something we can actually take advantage of. We can go after the slot because it’s cheaper nowadays. I believe, Const, correct me if I’m mistaken, the first slot during the bull market came out to, what was it, like $40 or $50 million. It was crazy. And now it’s just below a million, which is not terrible. And we can get it with a sufficient amount of DOT. So we’re working with our investors, we’re working internally, and we’re actually going to aim to go for it.


But the crowdloan for the parachain slot will be open regardless, because our investors need to participate through that route; they can’t bid with us directly. That’s just not how it works. And, as I mentioned, I will be posting the parachain slot auction live so we can all follow it, see how we’re doing, and see who we’re competing against. So if you have DOT and you feel like participating, that’d be amazing; feel free to participate and stake some DOT. Now, if you stake them for the crowdloan, they’ll actually be locked for two years, and they’ll come back to you after two years. We don’t take them from you, and neither does the Polkadot ecosystem. Just keep in mind that there is no incentive this time: we’re not going to give you anything back if you do stake to help us out. It’s mostly, you know, a lot of thanks from us and a lot of internet hugs.


Dr. M asks, it will take some time to move servers to Finney. Will there be a grace period or a similar mechanism during the transition? So, yeah, go for it.

Jacob Steeves (11:12):

We will give you guys more information on the specifics of this closer to the date. But there will be a requirement to fill those slots again. So if you’ve won them in the previous network, you won’t have them in the next one, as things currently stand. Unless Shiv wants to expand on that question.

Ala Shaabana (11:37):

Now that we’re on this topic: from your end as a miner, here’s what you should know. The only real difference is that it’s going to be a parachain, and blocks will probably start from zero at that point. So effectively, technically, you’ll be kind of starting over from a blocks perspective, but everything else stays the same: your accounts are the same, nothing changes. But as Const said, the slots themselves are going to be up for grabs, so you might actually have to hustle to grab those. Otherwise, this is still kind of a work in progress, and we’ll give you many more details as we obtain the slot and move along. Is the team worried about the CIA killing people with too much TAO? Serious question.

Jacob Steeves (12:19):

You’re talking about the MakerDAO guy dying because of pedophile sex rings in the Caribbean islands. I think that we’re sufficiently below the radar on anything like that, whether or not that’s actually a thing that’s happening. No, we’re not worried about that.

Ala Shaabana (12:49):

No, as a brown person, I’m actually always terrified of the CIA, so it’s not really a thing. Okay: early adopter here. I would like to ask the team to congratulate each other for the amazing work you guys have done and accomplished. Thank you so much; we really appreciate it. It was mentioned that the BitTensor dataset will be grown soon. What goes into the decision of what is added? I vote for a BitTensor text collection.

Jacob Steeves (13:16):

Yes, really, it’s just our own process. So we’ve done this internally. It’s something that we can talk about with the community as we get to that stage, but we know what we’re going to add right now: it’s going to be an extension of, probably, the Pile. And we do have a BitTensor text collection that we made almost two years ago that we’ll add in as well. But to answer your question specifically, what goes into the decision? We haven’t crossed that bridge because we haven’t needed to. Previously, it was just us, and that’s how we imagine we’ll do it next time. Just us.

Ala Shaabana (14:05):

What is the expected timeline for the first subnet after the launch of Finney? I’m not sure what you mean. The first subnet is going to be a part of it as it is, and that’s going to be the text subnet. So it’s not really going to change; that’s not how it works. So the timeline will be zero, I guess, in a sense, because it’ll be open right away.


When will the BitTensor dataset change? The idea of continually having it on the Mountain seems unsustainable. Do we have to change this technology? Actually, it’s not that unsustainable, Barbarian, because the Mountain itself is actually a scattered set of IPFS nodes. So if we do change it, the change just propagates across the nodes and we’re good to go from there. Whenever we add to it, it’s going to be added that way. We can extend it, and the more that’s there, the better, really. And the more IPFS nodes there are, the better, because that’s going to add to the decentralization of our dataset and of the machine knowledge being generated here.
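A minimal sketch of the idea behind a dataset like the Mountain, assuming (as described above) that it is an index of content-addressed objects scattered across IPFS nodes. The class and method names here are made up for illustration; real IPFS CIDs come from `ipfs add`, and a plain SHA-256 digest stands in for a CID below.

```python
# Sketch only: growing a content-addressed dataset just means appending
# new content identifiers -- no single node has to hold everything.
import hashlib

class MountainIndex:
    """Hypothetical index of content-addressed data shards."""

    def __init__(self):
        self.cids: list[str] = []

    def add(self, blob: bytes) -> str:
        # A real CID would come from IPFS; SHA-256 stands in here.
        cid = hashlib.sha256(blob).hexdigest()
        self.cids.append(cid)
        return cid

index = MountainIndex()
cid = index.add(b"some new text shard")
```

Because entries are content-addressed, any node holding a shard can serve it, which is what gives the dataset its decentralization.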

Jacob Steeves (15:16):

That’s similar to the question, will OpenTensor publish tools or guides on how to build on validators? It’s similar to the release of the playground, as this question is, and I think it’s a great idea. So we’ll keep that in mind, but we didn’t have any plans for that.

Ala Shaabana (15:35):

Okay, I just had a chance to look through the chat. I think everyone’s kind of freaking out about mentioning that they might need to re-register. We should probably clarify this a little bit because it’s still a work in progress.

Jacob Steeves (15:45):

Yeah, so as far as we know, at this point, we don’t know if it’s possible for us to do a deep copy of the network as it is. The reason is that we’ve changed a lot of the storage maps and also the data structures themselves. So it would be very, very difficult for us to do just a deep copy of the miners. Everything will be changed, with the exception of the stake; well, the funds in people’s wallets.


There will be 4,096 UIDs available and the difficulty will be very low, so it will be possible for people to get into the network easily early on; it’s not going to jump to 41 trillion immediately like it did before, which was the steady-state equilibrium for the demand versus how many slots were available in the system. So from one perspective, yes, people are going to lose their UIDs. From another perspective, it’s an opportunity to get UIDs in what was before a pretty crowded network.


And also, like we said before, we’re going to be increasing the UID count, so registration will become less of an issue as we increase the number of slots. Alright, one moment Jacob, I’ve been having some issues here.

Ala Shaabana (17:07):

Okay, let’s see. Are we good to go or do you want to kind of elaborate on this, Jacob? Yeah, so that was the elaboration that I had.

Jacob Steeves (17:29):

So just to add a little bit, guys, as Const mentioned, this is a very, very difficult problem, because the other tidbit we also need to worry about is: say that we do save the network as is.

Ala Shaabana (17:36):

Say we do find a deep copy and it works out, which we are actually going after anyway; we’re trying to do this regardless. It still introduces more problems. It’s not a catch-all solution where you can just freeze the network and then move on to the next one. It doesn’t quite work that way, because if you did do this, then you’d also need to introduce a grace period, and all kinds of sub-complications to the registration mechanism so that it stays the same as people get in. Because as people get in, you will still get some registration.

Jacob Steeves (18:15):

One of the important differences is that, the way things work right now, when someone registers, we pull someone else out, and that makes it very difficult to register keys because there’s a highly competitive quality to getting those slots. In the new network, with 4,096 UIDs, you’ll be able to get a slot and then maintain it, because we won’t have hit the 4,096-UID limit.
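The registration difference described above can be sketched in a few lines. This is a hedged illustration, not the actual chain logic, and the function and parameter names are invented: below capacity a new key simply takes a free UID, while at capacity the lowest-scoring key is swapped out, which is what makes slots so competitive on a full network.

```python
# Illustrative sketch of slot registration on a capped network.

def register(uids: dict[int, str], scores: dict[int, float],
             hotkey: str, capacity: int = 4096) -> int:
    """Assign a UID to `hotkey`, evicting the weakest key if full."""
    if len(uids) < capacity:        # at launch: free slots still exist
        uid = len(uids)
    else:                           # full network: evict the lowest score
        uid = min(scores, key=scores.get)
    uids[uid] = hotkey
    scores[uid] = 0.0
    return uid

uids, scores = {}, {}
register(uids, scores, "key1", capacity=2)   # takes uid 0
register(uids, scores, "key2", capacity=2)   # takes uid 1
```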


We’re getting a lot of feedback on this, so I think it’s probably something that me and Shiv and the engineers need to sit down and come up with a plan for. Perhaps there’s a way for us to do this much more elegantly, without disrupting the current state of people’s UID slots. Let’s pin this one and we’ll come back as soon as we can.

Ala Shaabana (19:06):

We heard you guys. We’ll dig into this. Considering the evolution so far, how would you describe the timeline towards support for new machine learning domains on BitTensor, audio, image related? This might be from a newer person. The subnetworks that we’ve been describing in this call, and we’ve described this a few times, are going to be the thing that gets us towards that support.


The idea behind it is, as we have different subnetworks, each subnetwork of the Finney network that we’re launching next year is going to support a different domain. For example, we can have a subnetwork for images, a subnetwork for audio, a subnetwork for text, and so on. Eventually, this is going to pave the way towards multimodal models: models that are able to do image and audio, for example, or audio and text, and so on and so forth. Will BitTensor openly publish tools or guides on how to build on validators?


How much TAO will be needed at current supply to meaningfully shift the incentive towards a custom dataset enough for some miners to tune models to it?

Jacob Steeves (20:23):

Oh, very good question. Very good question. You can shift the incentive towards a dataset with one TAO. Meaningful here is ambiguous. Can you diverge the network to work on Chinese so that everyone needs to understand Chinese? Yes, totally.


It’s very difficult for us to say how meaningful a shift would be based on a number for TAO. I don’t think we can. But we can say that having more TAO attached to a particular dataset would create that shift.


Generally, we don’t want people to do this. If someone wants to work on a custom dataset, we hope that we can use the subnetwork mechanism to create a subnetwork that works on that dataset. So there’s a consensus amongst the validators on it. If the anonymous person who asked this question would like to propose another dataset, perhaps they can come to the Discord and do that and we can build that network for them.

Ala Shaabana (21:37):

Can you explain how validators query miners for intelligence in practice? Could I already build an interface which uses BitTensor as its engine? There are a few videos on our YouTube channel that answer this question rather well.

Jacob Steeves (21:49):

I don’t mind answering this question.

Ala Shaabana (21:52):

There are quite a few to go, though. Just as a heads up, this is kind of a whole lecture on its own.

Jacob Steeves (21:56):

Yeah, we’ve got time. We have 30 minutes. Could you explain how validators actually query miners for intelligence in practice? Could I already build an interface which uses BitTensor as its engine? The answer to the second question is yes. That’s what we do at the Foundation. We haven’t released any guidelines for doing that effectively, because we are still crafting our best practices for extracting knowledge from the network. We have some architectures which we think people should be using: different styles of mixtures of experts. Did you delete the question there, Ala?

Ala Shaabana (22:36):

No, I didn’t delete it. It just moved out.

Jacob Steeves (22:38):

Oh, okay. It moved down. Oh yeah, sorry. It’s moving down. So that’s the answer to the second question; there are two questions in there. The first is that it’s less of an interface and more of a technique. The interface that we built with the playground was a direct interaction with BitTensor: we queried the miners, and that’s quite easy to do if you just learn our API. If you want to use the JavaScript backend, that’s totally fine as well.


All of those are fairly easy if you’re a developer and want to build a tool on top of it. Then the first question: could you explain how validators actually query miners for intelligence in practice? So in practice, what the validators do is pull data from the Mountain dataset, send it to the network, get a bunch of responses, and use those to evaluate the performance of the miners.
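The loop just described can be sketched roughly as follows. This is an illustration of the pattern, not the Foundation's actual validator code, and all names are invented: pull a batch, query each miner, and turn its response loss into a score.

```python
# Minimal sketch of a validation step: query miners, score responses.

def validate_step(batch, miners, loss_fn):
    """Return per-miner scores; lower loss means a higher score."""
    scores = {}
    for uid, miner in miners.items():
        response = miner(batch)              # a remote call in practice
        scores[uid] = 1.0 / (1.0 + loss_fn(response, batch))
    return scores

# Toy usage: two stand-in "miners" that differ only in their loss.
miners = {0: lambda b: "good", 1: lambda b: "bad"}
loss = lambda out, b: 0.0 if out == "good" else 3.0
results = validate_step(None, miners, loss)
```

In the real network the loss would be something like next-token-prediction error on the returned logits, and the scores feed into the weights the validator sets on chain.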


The miners are required to produce an understanding of those sentences that allows for generation, and also the embeddings of those sentences for doing prediction on an NTP, next-token prediction, task. So that’s definitely involved, like Ala suggested, but I hope that someone can ask that again in general and we can dive in at a later date.

Ala Shaabana (23:58):

To the anonymous person who asked this question, if you’re not aware, we do have a YouTube channel that’s full of videos. We have a video that explains exactly this, with infographics and everything. It should be rather helpful. Yeah. Dope. Could you please explain subnets? Thank you.

Jacob Steeves (24:20):

Great. So Nakamoto is a network of UIDs, meaning there are slots. These UIDs refer to individual slots that a node or an endpoint, which is just a server on BitTensor, can take.


That’s what composes a network: a set of these UIDs. Subnetworks just means the ability for the chain to create multiples of those networks, and actually have them interact, so that we can have different consensus mechanisms at play in BitTensor and different hyperparameters at play. And also so that we can work on different modalities of machine learning concurrently, but all within the same incentive structure and on the same chain. Yeah, that’s it for that one.

Ala Shaabana (25:07):

How does getting the parachain slot fit into the BitTensor long-term roadmap? Besides DEX, what new types of things does it enable BitTensor to do? Yeah. Do you want to take this one?

Jacob Steeves (25:18):

Yeah, sure. So the value of Polkadot is that we get interchain connectivity; that’s number one. So, as you mentioned, the DEX. But what it also provides is the ability for us to borrow the security of their collator network and their validator network; in Polkadot’s case, proof-of-stake security. This is very useful if we want to ensure that the chain is not run by a small group of individuals and can therefore be attacked through that single point of failure, or that small group’s point of failure.


That’s the blockchain requirement if we want to have decentralization, run the DAO and the BitTensor incentive mechanism in a decentralized way, and therefore allow cross-chain access and movement of funds, without the potential for somebody, whether an organization or a government, to interfere with what we’re doing.

Ala Shaabana (26:26):

It’s taking so long to increase div slash trust even on highly fine-tuned models. Will this be fixed soon?

Jacob Steeves (26:34):

It’s taking so long to increase div trust even on highly fine-tuned models. Will this be fixed soon?


We’re fine-tuning this. It’s something that works, but not as well as it could, and we’re fine-tuning it. There are many things at play here, and I think many people in the community are very aware of the relationship and the trade-offs with how fast divs or trust increase. Because what we’re looking at here is the length of time it takes for a validator to reset their weights, and the error bars in that evaluation. If the error bars are too high, then what we see is people pulled off the network who shouldn’t have been pulled off, and that disrupts the network.


And on the other side, if we don’t move fast enough, then obviously people can’t get in, because it takes too long to increase the divs or trust, even for highly fine-tuned models. It’s nice that we hear this kind of complaint from the community, because it’s something we can then look into and say, okay, great: these models that we think should be performing well, are they performing well? What is our analysis on that? Long story short, it’s not something to be fixed, it’s something to be optimized, and we’re continually working on it.
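The trade-off described here can be made concrete with a toy smoothing model. This is only a sketch of the tension, assuming trust is smoothed over time with a simple exponential moving average; the real mechanism and its constants differ. A small smoothing factor gives low-noise consensus but slow trust growth for a new, well-tuned model; a large one is the reverse.

```python
# Toy model of the speed-vs-noise trade-off in trust updates.

def update_trust(trust: float, observed: float, alpha: float) -> float:
    """One EMA step: alpha controls how fast new evidence is absorbed."""
    return (1 - alpha) * trust + alpha * observed

t_slow = t_fast = 0.0
for _ in range(10):                  # ten epochs of a perfectly-scoring miner
    t_slow = update_trust(t_slow, 1.0, alpha=0.05)   # stable but slow
    t_fast = update_trust(t_fast, 1.0, alpha=0.5)    # fast but noisy
# t_fast approaches 1.0 much sooner, but would also track noisy dips
# just as quickly, which is how good miners get pulled off by mistake.
```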

Ala Shaabana (27:58):

Could you publish more info on the vision for BitTensor in terms of what we will scale into, how some of this will create value, future applications, et cetera? So we actually did this last year, around January I believe, when we published a roadmap of where we were going to end up. We did mention to the community that launching onto a DEX, being able to liquidate your TAO, was going to be the holy grail of this year’s approach, which is kind of what we’ve done.


And that’s kind of this approach that we’re still going after. Next year we’ll be doing the same thing. We’ll be publishing another roadmap for the year. We will highlight subnetworks because they’re going to be launched at that point, future applications and everything, around January. So stay tuned, it’s coming.

Jacob Steeves (28:40):

Yeah, before you put this down, Ala, I think the question is more about what the vision is for BitTensor at scale and how subnetworks will create value. We think that what we have in spades is scale, in terms of compute to build foundation models, which other teams don’t have, because we have this highly incentivized compute layer.


So that’s the vision for BitTensor: leveraging incentivization to scale. And subnetworks just allow us to work on different modalities, so that we can work inside, say, images, or the relationship between images and text. That will push us into the multi-modality world, allowing us to compete on the same footing as other companies that are also hitting those modalities.

Ala Shaabana (29:41):

Is there a Finney launch ETA? Sorry if it’s been asked already. Is it tied and/or related to the parachain slot? We’re aiming for January 10th for the Finney launch. The parachain slot is what’s going to enable us to launch Finney, because that’s going to be the methodology that allows everybody to be part of the parachain and participate in the DEX and everything like that. So it is heavily tied to the parachain. How do you get the network to learn, i.e. as a validator, can you just query the network for images or anything new? What is the process through which it learns? I believe the first part of this was answered earlier.

Jacob Steeves (30:21):

It’s a great question. How do you get the network to learn? The network is incentivized to serve information that has been learned. So we don’t directly ask the network to learn unless we ask for gradients, which are currently turned off. I’m just going to mute this guy. So, with gradients turned off, as they are now, we build the incentive mechanism above the commodity we want trained, and then we ask the network to effectively do that.


So the network learns exterior to BitTensor. As people have mentioned in this call: how do you fine-tune, why is my fine-tuned model not performing as well? Fine-tuning is the mechanism of learning: pulling a pre-trained model and then loading it into the network. Yeah, thanks, Ala.

Ala Shaabana (31:24):

Why have some divs risen since the latest update? Is there a technical reason behind this?

Jacob Steeves (31:28):

I think that a few of the validators, the larger validators, just simply went down and so their divs fell off and others went up.

Ala Shaabana (31:37):

Basically this. How is the branding looking so far? Any clues when the work will be visible?

Jacob Steeves (31:44):

Some of it is visible already. The new logo was part of the branding exercise, and the colors of the Discord are part of the branding exercise, but the major presentation will be in the website release coming in February.

Ala Shaabana (32:01):

And how will the Finney updates improve the BitTensor network?

Jacob Steeves (32:05):

I think we’ve spoken to that a number of times. Yeah.

Ala Shaabana (32:09):

Onward. BlackRock asks: interested in events, and is it possible to speak in general to sales? Thanks, BlackRock, appreciate it. Feel free to reach out to the PR team. Oh, this is the continuation of that one; it’s getting answered in your channel. Thank you so much, really, really appreciate it. Anonymous: as a validator, can you just start querying the network with new types of outputs, and what does that look like from the miner’s perspective?

Jacob Steeves (32:39):

Can you just start querying the network with new types of outputs? Yes. What does it look like from the miner’s perspective? The miners are learning how to produce representations of language, of text inputs in general, based on the limitations of the tokenizer. That’s the only limitation: the state space is what the tokenizer can produce from natural language. So if you can create a sequence of text in any language that works with our tokenizer, you can query the network and ask it what it thinks of that particular input.


So what does it look like from the miner’s perspective? They receive a tokenized sequence of text, and then they run their node, their miner, their model, to produce those representations, and the logits as well, depending on what’s being asked of them. Both of those are outputs of the machine learning model that relate to the sentence that was queried to them.
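The miner's contract described above can be sketched by its shapes alone. This stub is illustrative only: a real miner wraps an actual language model, and the vocabulary and embedding sizes below are assumptions (50257 is a GPT-2-style tokenizer vocabulary), not BitTensor's actual values.

```python
# Shape-only stub of a miner's forward pass: per-token embeddings plus
# next-token logits over the tokenizer vocabulary.

VOCAB, DIM = 50257, 1024   # assumed sizes, for illustration

def miner_forward(token_ids: list[int]):
    """Return (embeddings, logits), one row of each per input token."""
    embeddings = [[0.0] * DIM for _ in token_ids]    # representation vectors
    logits = [[0.0] * VOCAB for _ in token_ids]      # next-token scores
    return embeddings, logits

emb, log = miner_forward([15496, 995])               # a two-token sequence
```

A validator consuming these outputs uses the embeddings for representation tasks and the logits for next-token-prediction scoring.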

Ala Shaabana (33:49):

Truly amazing, I can’t wait to see everyone reach their full potential. Thank you, Novostipi. That was a long one; really enjoyed it. How does the BitTensor algorithm regard ensembling methods, or is that up to the miner to build into a UI/UX?

Jacob Steeves (34:04):

Yes. So that’s a really good question. Ensemble methods are basically techniques where multiple models are joined together to produce a model which is better than any individual one: greater than the sum of the parts. And mixture models are effectively smart ensembles.


BitTensor is designed as a mixture model. It’s designed to work as a mixture model and to be queried as a mixture model. The presentations that we’ve done over the last year, with things like the alpha playground, for instance, weren’t using mixtures at all; we were simply querying the endpoints to showcase that they were producing something of value. In the future, BitTensor will be queried through mixture models, so that the value the miners are creating is extracted in the same way that the validators are incentivizing it: by joining the nodes together into a model and hosting it somewhere or training it somewhere else.

Ala Shaabana (35:11):

Can you talk about the two supercomputers dedicated to BitTensor?

Jacob Steeves (35:16):

Do you know what those two supercomputers are?

Ala Shaabana (35:18):

Karo, did you ask this question yourself?

Jacob Steeves (35:21):

It’s me. It wasn’t me.

Ala Shaabana (35:25):

But I think you’re kind of on that side, aren’t you?

Jacob Steeves (35:28):

She even got you? Yeah. I don’t have anything to say right now.

Ala Shaabana (35:33):

Okay, there you go. We won’t say anything. Stay tuned.

Jacob Steeves (35:36):

Stay tuned, yeah.

Ala Shaabana (35:40):

How far away is the network from being able to process at the same level as ChatGPT, or run something of similar ability?

Jacob Steeves (35:46):

Ooh, no comment, but more to come.

Ala Shaabana (35:51):

We’re working on it, guys. This is fun. Yes, it is. It is indeed. So much fun. Thank you so much. Does that answer this question?

Jacob Steeves (36:06):

Any updates on the SubTensor failover flag? Yes, yes, yes. This one’s come up a number of times. The PR has really fallen into the waste bin because we’ve been heads down with the Finney network and haven’t had time for upgrades on the core SubTensor. But the PR is actually there on our GitHub. If someone in the community who knows how to code wants to take that one on and save Credo from asking this question again at the next TGIFT, it would be much appreciated. You’ll get a lot of internet hugs.

Ala Shaabana (36:41):

If there would be a second text network, what would be the niche?

Jacob Steeves (36:50):

Yeah, really good question. I think the niche might be in the validation technique. We’ve talked about this internally, about moving or trying different forms of validation techniques. I think that another one would be working on a specific form or modality of language, maybe say code generation, rather than just the grand corpus of textual information that we have in the Mountain dataset.


But we don’t know anything beyond that, really. We haven’t really explored or envisioned anything beyond those two divergences, as far as I know.

Ala Shaabana (37:43):

It’s best to refrain from giving specific dates on milestones going forward. Instead, stick to quarters to avoid disappointments: Finney, Q1, etc. Thanks for your advice, it’s actually good advice. I think the January 10th date, though, is not arbitrary. It’s a date that means a lot to us. Right, Const? It was the date Kusanagi first came up.

Jacob Steeves (38:03):


Ala Shaabana (38:04):

No, it’s not the original BitTensor. That’s not it.

Jacob Steeves (38:07):

No, it’s based on the first Bitcoin transaction. Oh, that’s right. Yeah. What is the name of your YouTube channel with the videos? The one I can find only has two videos.

Ala Shaabana (38:13):

Oh, that’s right.

Jacob Steeves (38:21):

We have that link somewhere. Could someone please post that in? Maybe if Mac is on the call, could you please post that in the channel? Or someone from the Opentensor team? Post our video channel? This one’s a joke.

Ala Shaabana (38:43):

It happens every time. Every time, somebody asks this question.

Jacob Steeves (38:46):

Yeah, yeah, yeah. I’ll ask the devs what they can do. You mentioned that a developer could already build an API which utilizes BitTensor as an engine. Do you have any plans to make it less dependent on technical ability? Yes. This is something that we’re continually working on, for instance making the torch API a lot easier to use. If you are a developer yourself, I wouldn’t be so bearish on your ability to figure out what we’ve done. Since it’s Python, it’s fairly easy.


As for the JavaScript implementation, effectively what the front end did is run a Flask server, which is Python, connected to the BitTensor API, with a JavaScript front end on top. Sorry: I built a Python backend and then connected to it with the JavaScript front end. So if you’re looking for the architecture of how we built the playground, that’s what we did.

Ala Shaabana (39:51):

A developer could already build an API which utilizes BitTensor as an engine. Do you have plans to make it less technical? I already answered this question. Where can I learn to connect to the BitTensor API?

Jacob Steeves (40:02):

Through our Python GitHub. So if you go on GitHub to opentensor/bittensor, you’ll be able to see what we’ve written there and you should be able to connect with it. It’s basically the same, well, it is the same technology that you install when you run miners. So everything is bundled inside BitTensor.

Ala Shaabana (40:24):

The BitTensor settings on YouTube are weird and you have to look at the list of all the videos. I’ll look into this and fix it today if I can. The last TGIFT wasn’t uploaded to the YouTube channel. Could we take care of it? I believe that was Taquelin. She recorded the last TGIFT if I’m not mistaken, or was there one when I wasn’t around?

Jacob Steeves (40:43):

It might not have been recorded. Yeah, as Mike pointed out, it wasn’t recorded. It was less of a presentation and more of just a community talk. I don’t think there was one. There you go. So Mac has actually posted the YouTube channel if people are looking for it here. Thank you, Mac. Does the loss function for the network live with the validators? Yes. If so, what would stop someone from sending data through this function that wasn’t coming from the model? This is absolutely possible, and it’s designed this way so that people can build any type of model they want. Our servers are agnostic to Hugging Face; they’re agnostic to the miners that we built into the BitTensor Python API.


The BitTensor Python API was built so that people could easily build their own neurons. We built the API for ourselves, and then we released it, obviously open source, with the rest of the code.


The goal for BitTensor is building incentive mechanisms on top of arbitrary, model-agnostic machine learning servers, so that we incentivize the production of this digital commodity in a way that’s very valuable to our clients and ourselves. Hopefully that answers the question.
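One way the validator-side loss mentioned above could look, sketched as a standard next-token cross-entropy. The real validator logic differs; this only illustrates the idea of scoring whatever logits a server returns, regardless of what produced them.

```python
import numpy as np

def next_token_loss(logits, targets):
    """Cross-entropy a validator might compute over a miner's returned
    logits: how well did the miner predict each actual next token?"""
    logits = np.asarray(logits, dtype=float)
    # numerically stable log-softmax over the vocabulary axis
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

# Miner A nails both next tokens; miner B spreads mass uniformly.
targets = [2, 0]
sharp   = [[0, 0, 9], [9, 0, 0]]
uniform = [[1, 1, 1], [1, 1, 1]]
print(next_token_loss(sharp, targets) < next_token_loss(uniform, targets))  # True
```

Because the loss only sees the returned logits, any process that emits good logits scores well, which is exactly the model-agnostic property described above.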

Ala Shaabana (42:12):

A high-level overview of TAO economics for a beginner? Yeah: supply from mining, demand from burn and staking. Can I spend TAO for training?

Jacob Steeves (42:25):

Right. Explain like I’m five: your expensive Montessori preschool costs money to enter. And why do you go? Because you want to be around the smartest individuals, so that you can become intelligent yourself. If there are any five-year-olds on the call, let us know if that made sense.

Ala Shaabana (42:49):

I think that was the idea. Couldn’t someone literally just query the Mountain and get zero loss?

Jacob Steeves (42:54):

I think this is a very pointed question, because the person asking it might be the developer of the attack. Yeah, so basically that’s what happened this week: someone was using what’s called a retrieval attack, which is something we knew about for a very long time, and we simply did not have that defense in place. So when we saw this attack happen, we knew what we needed to do. You can see how we solved that in the latest validator update, if you’re curious.
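A toy illustration of the lookup-style attack being described, assuming the training corpus is public: if the queried text can be found verbatim in the dataset, a "miner" can emit the true next token with no model at all and score a near-perfect loss. The dataset and function here are invented.

```python
# The corpus the validator queries from. If miners can search it too,
# exact prefix matches reveal the true next token for free.
DATASET = "the quick brown fox jumps over the lazy dog".split()

def lookup_miner(prefix):
    """Return the exact next word if the prefix appears verbatim
    in the corpus; otherwise give up (no actual model involved)."""
    n = len(prefix)
    for i in range(len(DATASET) - n):
        if DATASET[i:i + n] == prefix:
            return DATASET[i + n]
    return None

print(lookup_miner(["quick", "brown"]))  # prints fox
```

The defense, in general terms, is to make sure the queried continuation cannot be recovered by search alone, for example by holding back or perturbing what the miner can look up.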

Ala Shaabana (43:33):

Is the loss function for the network with validators? If so, what would stop someone from sending data through this function that wasn’t coming from the model?

Jacob Steeves (43:40):

That was already answered.

Ala Shaabana (43:44):

How would Opentensor deal with validators using illegal or highly questionable datasets, making their own subnets around them?

Jacob Steeves (43:49):

Yes, this is the fundamental problem in BitTensor around cabal creation: the divergence between the subnetworks, which we outline at length in the white paper. The gist of it is that if you diverge from the other validators, it’s costly, because the consensus group, the 51%, is where the majority of the inflation is accrued. Miners are willing and able to move the dataset, or move their subnet around that, but the cost is to their dividends.
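A simplified sketch of that majority-consensus cost (not the actual Yuma Consensus math; the stake split and weights are invented): if a miner’s reward is driven by a stake-weighted median of validator weights, a lone diverging validator moves the outcome very little, so the weight it pours into its own cabal is mostly wasted.

```python
import numpy as np

stake = np.array([0.4, 0.35, 0.25])   # three validators' stake shares
weights = np.array([
    [0.9, 0.1],   # validator 0: backs miner 0
    [0.8, 0.2],   # validator 1: backs miner 0
    [0.0, 1.0],   # validator 2 diverges: backs its own miner 1
])

def consensus_weight(stake, weights):
    """Per-miner stake-weighted median: the largest weight that
    validators controlling more than 50% of stake still support."""
    out = []
    for col in weights.T:                     # one column per miner
        order = np.argsort(col)               # sort weights ascending
        cum = np.cumsum(stake[order])         # accumulate stake in that order
        out.append(col[order][np.searchsorted(cum, 0.5)])
    return np.array(out)

print(consensus_weight(stake, weights))
# miner 0 keeps most of the emission; the cabal's 1.0 collapses to 0.2
```

Under this kind of rule, validator 2 also loses dividends, since its weights disagree with what the majority coalition settled on, which is the economic cost being described.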

Ala Shaabana (44:28):

So they expect no loss. That’s fine.

Jacob Steeves (44:30):

Ship ship. I get it.

Ala Shaabana (44:34):

Can we use the same keys for re-registration, or do they need to be different keys?

Jacob Steeves (44:39):

You can use the same keys. I think this is referring to the next network. So we’ve heard the community loud and clear on this. We’re going to take a much deeper look, and thank you everyone for pushing that one. Obviously, there was a lot of negative response to our choice, which tells us that it’s something we need to rehash and re-understand, and do our best to alleviate the pains that we might be causing by that change. So thank you everyone for all of your questions. Very valuable conversation as usual. If there was more resolution that people needed on any of these questions, feel free to ask them again in the general channel, and obviously @ me or Ship or any of the other developers or engineers in the project. Thank you all. We’re going to stop here. We’ll keep the slide open and we’ll come back in two weeks around the time of the release. That’s going to be very exciting. Merry Christmas, Happy Festivus, and a Happy New Year, guys.

Video Description

Subscribe to The Bittensor Hub for everything Bittensor!

Bittensor is an open-source protocol that powers a scalable, globally-distributed, decentralized neural network. The system is designed to incentivize the production of artificial intelligence by training models within a distributed infrastructure and rewarding insight gained through data with a custom digital currency.





Podscript is a personal project to make podcast transcripts available to everyone for free.