Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present.

Carro (00:02):

Howdy, howdy. Can everyone hear me? Yes. Wonderful. How's everyone doing today? Great, I hope. I just realized that most people can't respond to me in a Twitter Space.

Jacob Steeves (00:21):

Unless that was leveled at me, Robert.

Carro (00:23):

No, no, no. It's, yeah, it's great to be here. It looks like a lot of people are starting to really fill in right now. I just want to say thank you. Thank you everyone for being here today. It's really exciting. We've been working really hard to build all the stuff that we're going to be talking about today. And we're working on a quick little surprise that's only going to be available during this call. So by being here and participating in this call, you're going to be able to access some of the cutting edge that BitTensor has to offer.


But before we go there, I'd really like to start by talking about why we started building this in the first place. This goes all the way back to the first week of December, when we were at the NeurIPS conference, the Neural Information Processing Systems conference.


We met a lot of really great, intelligent people there and got to meet people who were working on similar things to us. And this was around the same time, that Tuesday, that ChatGPT dropped. So of course we were talking about it at the conference, and some RL people and I were discussing that it probably wouldn't be all that hard to reimplement and make our own open-source version.


Little did I know that it's a lot more involved than you initially realize. We were able to make a model that was pretty good at being aligned within the first couple of weeks. Obviously this was a research project, just a re-implementation, and I showed it to some community members and they were really enthused and jazzed about it. So we tried to make it better and better and really design out some of the principles that are going to be a part of what BitTensor is going forward.


It took a little bit longer than we initially wanted it to, but where we are today is that we now have an interface to these types of prompting models. And because we have just released Finney, which enables us to have sub-networks, delegated validators, and delegated staking, this next step that we are taking is going to enable incredible businesses and applications to be built on top of BitTensor. So Finney was that first step, right? ChatTensor was a research project, really trying to figure out the best practices and share them with the greater community.


With Finney we've enabled delegated staking and delegated validators for businesses to come online. And we've seen some businesses come online, like what Mog Machine has been working on, what Mr. Seeker has been working on, what AI Zorro has been working on, Tau Station, RunPod. There are so many it would be hard to list them all here, and I don't want to leave anyone out, but it's been incredible to see this growth come online. And so with ChatTensor, the idea is that anybody who's operating a delegated validator is going to be able to spin up their own validator, spin up a WebSocket and REST API, and then gate access to the BitTensor network, whether that's through Stripe API integrations where you're taking fiat currency like United States dollars or British pounds, and so on and so forth.


You can accept crypto, or however else you want to take payments for the value that you're providing there, which is access to the marketplace of machine intelligence. That's your prerogative as a delegated validator. ChatTensor is a demonstration of that, but we really want to take it one step further and make this go interplanetary. With the Nakamoto sub-network, sub-network three, we have an excellent mixture-of-experts next-token-prediction model that you can get embeddings from, and there are all these different types of synapses you can get. What we've been working on, and what we want to demonstrate today, is: what would a prompting model like ChatGPT look like, where instead of predicting the next token, it's in a user-assistant messaging format? What type of performance, what type of network, and what type of models could we include in a network like this? So for the remainder of this call, we want to demonstrate what this prompting network can offer and what a delegated validator is going to be able to offer, what I like to think of as a mini commerce district inside of the BitTensor network. So I'm going to tweet this link out really quick. This link is going to come down when we're finished, and you do not have to delegate to be able to access this demo. All you need to do is log in with your polkadot.js wallet or your Talisman on your desktop. We also offer support on mobile with the Nova wallet. And so,

Jacob Steeves (07:22):

Hey Rob, can I ask a question, to give the audience some more context? Can you explain exactly what makes this different from ChatTensor that we saw last week, and what's so exciting about what we've done here for the demo, so people can understand?

Carro (07:38):

Yes, absolutely. So right now in the BitTensor network on sub-network three, the knowledge is in a numerical representation, in the logits or the tokens. But with the prompting network, what we really wanted to do was validate just straight-up text. There's no tokenization or de-tokenization involved, no applying forward passes over logits or anything like that. We just wanted to evaluate text.


So, utilizing the research that we did with ChatTensor, we are using a reward model to validate different APIs on the network, and different types of models. Theoretically, a human could be a member of this network. They might not be able to respond fast enough, but based off of prompts, a human could type in the response, or any type of language model could. So, we're talking about, say, the Cohere API. If you plug in the Cohere API, that's going to work on the BitTensor prompting network.
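To make the idea concrete, here is an illustrative sketch of text-only reward scoring, not Bittensor's actual reward model or API; the scoring heuristic and all names are made up. The point it demonstrates is that because only the final text is evaluated, any producer of text, a hosted model, a third-party API, or even a human, can be ranked by the same function.

```python
# Toy stand-in for a learned reward model: score plain-text responses
# to a prompt, ignoring how the text was produced.
def toy_reward(prompt: str, response: str) -> float:
    """Score a response; a real deployment would use a trained reward model."""
    if not response.strip():
        return 0.0  # empty replies earn nothing
    # Crude heuristic: prefer substantive answers, capped at 50 words.
    return min(len(response.split()), 50) / 50.0

# Responses can come from anywhere: a model, an API wrapper, or a person.
candidates = {
    "api_model": "Austin is the capital of Texas.",
    "human": "Austin.",
    "empty_bot": "",
}

prompt = "What is the capital of Texas?"
ranked = sorted(candidates, key=lambda k: toy_reward(prompt, candidates[k]),
                reverse=True)
```

Under this toy scoring, the fuller API answer outranks the terse human one, and the empty reply ranks last; the real model would of course judge quality, not length.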


If you plug in GooseAI, or LangChain, or anything that is a language model that produces text, it can be validated and earn rewards in the marketplace that is BitTensor. That's what we are building right now, and that's what we're going to be demonstrating: the prowess of a sub-network like this. Yeah, go ahead.

Jacob Steeves (09:26):

Rob, if I may add some color: what we have here with the demo is really an expansion of what we had last week with ChatTensor. Robert did an amazing job of producing a language model, ChatTensor. We trained it in house; it's our own model. We needed to do that to learn how these things work really well, and obviously Carro did an incredible job of that. What you guys have been playing around with was a model that was hosted on BitTensor, but there was no incentive mechanism for chat-specific language. We had built BitTensor to be very general, to work with unsupervised language models, and that produced an output which was very abstract and a little bit difficult for most people to understand and apply in an application, more on the side of ML ops rather than client-facing applications. But what we've built here with the prompting network is an entire network where the outputs of these models are very interpretable.


For instance, "What is the capital of Texas?" will return "Austin." And that's what the miners are literally responding with. So, as Robert was saying, you could literally sit behind this endpoint and answer these questions directly. We don't care; we're agnostic to the way in which this information is produced. The demo that we're going to post in the chat below is literally connecting to a test network where this is running. So you're querying ChatTensor, but ChatTensor is talking to, right now, about 10 models, and selecting the outputs of those models as the prompt response. What's really cool here is that we have not just one model but a whole market of models that can plug themselves into this front end and attempt to maximize the reward. So we hope that this will drive down the price and also improve and drive up the diversity of what we're seeing in a chat front end. The sky's the limit with what you can build, and this is just a demo we built in one day. So hopefully people like it. That's the color I'll add; bounce pass back to you.
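The query-rank-select loop described above can be sketched in a few lines. This is a hedged illustration only, not the actual validator code; the miners, the reward function, and `serve_prompt` are all hypothetical stand-ins for the real network machinery.

```python
from typing import Callable, Dict

def serve_prompt(prompt: str,
                 miners: Dict[str, Callable[[str], str]],
                 reward: Callable[[str, str], float]) -> str:
    """Fan a prompt out to every miner, score each reply, serve the best."""
    replies = {uid: ask(prompt) for uid, ask in miners.items()}
    best_uid = max(replies, key=lambda uid: reward(prompt, replies[uid]))
    return replies[best_uid]

# Hypothetical miners: any callable that returns text can participate.
miners = {
    "uid_0": lambda p: "Austin",
    "uid_1": lambda p: "I cannot answer that.",  # a refusing miner loses out
}
# Toy reward: refusals score zero, everything else scores one.
reward = lambda p, r: 0.0 if "cannot" in r else 1.0

answer = serve_prompt("What is the capital of Texas?", miners, reward)
```

The design point is that the front end never has to trust one model: the refusing miner is simply out-competed at selection time.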

Carro (11:39):

Excellent. Nice bounce pass, you know what I mean? I just tweeted out the demo link. All you need to do is log in. If you're on desktop, log in with your Talisman or your polkadot.js. If you're on mobile, we just started supporting mobile, on both this demo and regular ChatTensor, using the Nova wallet.


We're going to be appending to the FAQ very shortly; we just added this functionality maybe an hour and a half ago. So feel free to throw some of your hardest problems at it. This is all running on a test network on BitTensor right now. And I think Jake said it really well: we are model agnostic. If you are producing intelligence, that intelligence is encompassed inside the marketplace that is BitTensor. So if you are limited by, say, a specific API provider to a particular quota, that's not a problem with BitTensor, because we have not only redundancy but programmability built directly into these models.


If you want to build an application, either by having your own delegated validator or by having an agreement with a delegated validator, whether through a subscription service, delegating TAO, or paying in crypto, you can build your application on top of BitTensor, off this prompting network specifically. You can think of the prompting network as a commerce district inside of BitTensor, and you can build your application and not have to worry about the API going down. Right now we only have a few miners, a few UIDs, in that network. But once we open this up, here towards the end of April, we're going to have hundreds of UIDs all running their own variations of ChatTensor, whether it's self-serve or self-hosted, or you're using a blend of different APIs, or whatever your exact prerogative is. That's going to be driven by the incentive of the BitTensor network.


There's somebody I follow on Twitter, CT Lewis; they are operating GPT Labs. That's a great example of someone who has had struggles, in his case with the OpenAI API, where they just had so much demand, but because they had to request access for more quota, their application broke. With something like BitTensor and hundreds of UIDs, we have that redundancy and that capacity built in, such that you're not going to run into that issue.

Jacob Steeves (15:15):

And this is a really interesting point you're making, Robert. When we first set out to build BitTensor, we thought the incentive mechanism was going to drive people to open up their compute and their intelligence and their data into the network, and that was going to be the killer app: we'd have the most compute, the most data, and the most high-quality models. And indeed, that is something we drove over the last year, and it's been amazing to see. But there was another dimension that we didn't consider when we were building BitTensor at first, which was censorship resistance. Because we're agnostic to what the miners are doing, and because there isn't a single entry point into this neural internet of endpoints, you can't censor the entry points; there are just too many to censor. There's a large conversation going on right now in the AI industry about this moratorium letter from Elon Musk and Yoshua Bengio and the LessWrong guy, about censoring and stopping machine learning. Or, if you sign up for an API with a lot of these projects, often they say: here's your quota, and who are you, and what are you using it for? We have AI that's gated. And this extra dimension I'm talking about, which I didn't foresee, was the fact that we're building a truly gateless AI. Because we're decentralized, because we're run by a large number of people distributed across the globe, you can't cut us all off. We're each like a head of a Hydra.


So this is something very interesting you mentioned there, Carro; I totally agree. If you want unstoppable applications built on artificial intelligence, I think that we're providing that. And for the first time, that's coming out with this new prompting network in a way that is very expressible and understandable to a general audience. We've touched into the real world. So anyways, Rob, that's all I'll say on that, and I'll bounce pass it over to you again.

Carro (17:43):

Yeah, you're hitting the nail right on the head there. Something that I've been discussing with some community members, something we've seen very recently, is plugins, the ChatGPT plugins, right? Where all you do is basically give the model a manifest, like a schema, dictating in natural English what you want it to do, and then it writes the code and executes it for you. It's a really interesting concept. And what's so cool about this prompting network is that that will work out of the box. Not built directly into the prompting network, but you could recreate that functionality using it. Our goal is to encompass all of machine intelligence, whether that's images, audio, video, prompting, next-token prediction, or any other type of artificial intelligence that comes out in the future. It's going to be available on top of BitTensor, because BitTensor is like the neural internet.


Jake said it really well yesterday: we're like the internet in 1999. Right now it's really hard on BitTensor to go and find the exact type of information that you're looking for, because it's not exactly clear how you would do that; it's just never been done before. With what we're doing here with BitTensor, particularly the prompting network and delegated validators, we're working on something very similar to what Google did with PageRank. We're making it not only searchable but accessible. The types of delegated validators that you can build are really interesting, whether it's just a simple reward model where you set weights proportionally to the rewards, you normalize that, you do all that good stuff to it; or you make it super complex, where you query every single UID for a few tokens and then have a learned model, like an extra linear layer at the very end, that selects which tokens to use based off the responses you got. There are so many really awesome ways that you can harness this, not only on the AI side but also for business applications. Unstoppable ones, ones that cannot be gated by people in their ivory towers who think that they know better than you.
And it's one thing where we have something like liquid democracy, like what we have here in BitTensor: I imagine if a validator did something like this, gating their knowledge because they knew best, you're going to see inflows and outflows of staking based off of how people feel about that particular validator.
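The "simple reward model" validator strategy Carro mentions, set weights proportionally to rewards and normalize, can be sketched as follows. This is an assumption-laden illustration, not the reference implementation; the function name and the zero-flooring of negative scores are my choices for the sketch.

```python
from typing import Dict

def normalize_weights(rewards: Dict[str, float]) -> Dict[str, float]:
    """Turn per-miner reward scores into proportional weights summing to 1.

    Miners with zero or negative reward get weight 0 (a sketch assumption).
    """
    clipped = {uid: max(r, 0.0) for uid, r in rewards.items()}
    total = sum(clipped.values())
    if total == 0.0:
        return {uid: 0.0 for uid in clipped}  # nothing to reward this round
    return {uid: r / total for uid, r in clipped.items()}

weights = normalize_weights({"uid_0": 2.0, "uid_1": 1.0, "uid_2": -0.5})
```

A validator running this loop each round would then submit the normalized weights on-chain; the "super complex" variant Carro describes would replace the proportional rule with a learned selection layer.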

Jacob Steeves (21:25):

And Rob, better yet, you don't even need to go through a validator. You can circumvent them and talk to the network directly. The validators are simply client applications to make it easier and more expressible to use, more user-facing, but the network is open to anyone. Developers out there who know how to access it: go in and use it. It is incentivized, so there is an economic barrier for sure, but that comes with the territory. So yeah, you're right, Rob. And I like the analogy of indexing, with Google, and how the ability to index the web with their PageRank algorithm, which obviated the hand-curation of information for the user that Yahoo had done and failed at, was the killer app for them. It was simply an indexer, a ranking method. So we're ranking the neural internet. That's what validators do, and they provide front ends for their users. Slightly different, and analogies can only take you so far. So anyways, Rob, is there anyone on this call who's actually using the front-end demo right now? I'm really curious. I guess people can't answer this, but put your hand up if you're using it, or comment in the channel below.

Carro (22:47):

Yeah. Let's bring some people up.

Jacob Steeves (22:50):

Yeah, for sure. Okay, so people can come up, and in the meantime I'll explain a little bit more about what makes this so interesting. From the front end it might not seem so amazing, but what Robert has built here is effectively an unstoppable application, an AI front end.

Ala Shaabana (23:05):

It might make sense to talk a little bit about how this really sets down the path for anybody, not just for us, to build things on top of BitTensor.

Jacob Steeves (23:17):

Yes, absolutely. What Rob built here was something that he did in, well, the front end took a little while, but we're building the tools so that you can plug in with these components and build your own ChatTensor application. And we want to make that so easy that you can build it in a day, if not with a single docker compose up with your validator hotkey; you can run your own ChatTensor. And that's what makes this really, really unique.

Carro (23:47):

Yeah, definitely. And that's actually something that is really exciting. I've been working on it, and I'm still working very diligently on it. My idea is: let's say you're a delegated validator, and you want to have your own application. Let's say you want to do something like what Perplexity AI has been doing. They've been doing something really interesting where they combine the ability of, I don't know if it's Bard or, I'm not exactly sure what API they're using. I'm pretty sure it's just a single one, though.


OpenAI. They're using OpenAI, and then giving the ability to quote the sources where it found that information. Let's say you want to build something like that. Well, what you're going to get out of the box with what we're open-sourcing, once we launch the public prompting network that's incentivized, is: you just do docker compose up, and you have a config where you specify your coldkey, your hotkey, and then these different API keys, whether that's a Stripe API key or a Coinbase Pay API key. Or you can build your own proprietary payment methods in there as well, because this is all just part of a Docker Compose.


And then what you have is a REST API that's gated behind that, with a database for users and the ability to generate API keys. You can have a WebSocket or a REST API, or both running, and then you can build whatever type of front end you would like. And boom, now you have a business like Perplexity AI, like a lot of these applications you've been seeing, and it's unstoppable. As a delegated validator, while there are costs to being a delegated validator, you can also accept money. The idea is that you'll take in more money than you're spending, and that difference is going to allow you to hire people to make your product better. Then you can start growing the number of people using your service, and boom, that's a flywheel right there. You have a business, and there could literally be hundreds of businesses just like this on top of the BitTensor network.
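The gating layer described above, check a caller's API key before forwarding the prompt to the network, can be sketched like this. This is purely illustrative: the key store, quota rule, and `forward` callable are all hypothetical, and a real validator service would sit behind a proper web framework with a user database and payment integration (Stripe, Coinbase Pay, TAO delegation) issuing the keys.

```python
from typing import Callable, Dict

# Hypothetical key store; in practice this would be a database populated
# when a user pays or subscribes.
API_KEYS: Dict[str, Dict] = {"key-abc123": {"plan": "subscription", "quota": 1000}}

def handle_request(api_key: str, prompt: str,
                   forward: Callable[[str], str]) -> Dict:
    """Gate a prompt behind an API key and a simple quota, then forward it."""
    user = API_KEYS.get(api_key)
    if user is None:
        return {"status": 401, "error": "invalid API key"}
    if user["quota"] <= 0:
        return {"status": 429, "error": "quota exhausted"}
    user["quota"] -= 1  # charge one request against the caller's quota
    return {"status": 200, "response": forward(prompt)}

# 'forward' stands in for the validator's query into the prompting network.
resp = handle_request("key-abc123", "What is the capital of Texas?",
                      lambda p: "Austin")
```

The business logic Carro describes lives entirely in this thin layer: the network answers the prompt, while the validator decides who may ask and how they pay.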

Jacob Steeves (26:49):

Right. And Rob, we also have dedicated.eth up here on the stage.

Carro (26:55):

dedicated, if you want to. Yeah, go ahead.

dedicated.eth (26:58):

Oh, hey guys. Thanks. I'm just trying the app right now; it's pretty interesting. It seems to be answering general questions pretty well, but when I ask it to write code, it doesn't seem to load; it just keeps loading. And I did have one other question about how we're doing censorship of questions right now, like making certain questions off limits, because I thought this was more of a decentralized sort of thing.

Jacob Steeves (27:27):

Wait, why? Where is that?

Carro (27:28):

I probably know why. So this is a demo, right? We kind of hacked this together really quickly. We're using a reward model right now that can promote this type of behavior. First off, I'm deeply sorry for that. Right now we have what looks like over a hundred concurrent active users, so on your code issue, it could be that we're being overloaded. It could just be because, yeah,

Ala Shaabana (28:06):

I think it might be a load thing. Yeah, it's a load thing.

Carro (28:10):

Yeah, it could be a load thing.

Jacob Steeves (28:11):

So, just to answer your question specifically: we're not censoring anything here. If there is censorship occurring, it's because the endpoints themselves are being overloaded, or the endpoints themselves are choosing to censor your transaction. We wrote them all, so we know that's not the case. But that kind of raises a point about how censorship can happen inside BitTensor, right? Yes, we could have built ChatTensor to censor certain types of questions and queries. We didn't. But the miners themselves could also censor your request, though there's competition there amongst the thousand UIDs we have right now; not all of them are hosted, not even nearly close to all of them. They could also simply not answer your query. So there is censorship, but the way we get around censorship is with decentralization.


Yeah. So, like,

Carro (29:14):

on the validator right now, you're going to be able to, for example, train your own reward model using synthetic data from the prompting network. This is actually another really cool application: the network can generate synthetic data, and then you can train your reward model on it. Let's say you want your reward model to be such that you return the least censored, or the least biased, or the least whatever you want to specify. As the validator, or just as a member with a hotkey registered in the network, you can choose based off all of those responses.


So, on getting a censored message right now: I'm deeply sorry, this has been very frustrating to deal with. We have some beta testers who are part of the community helping us eliminate all of these biases and rejections. Hopefully it was at least more verbose in its rejection, because before it would just be like, "please stop asking me," you know? It was a lot more unwilling to respond to certain types of questions. And then we took it too far the other way, and it would respond to too many types of questions; it would respond incorrectly if it didn't know, that sort of thing. So it's a very delicate balancing act, and we're still working on all of this technology. But getting it into your hands, into the hands of the community, I think, is how we're going to solve the issue that a lot of people right now are frustrated with other model providers about.

Jacob Steeves (31:17):

dedicated.eth, does that answer your question?

dedicated.eth (31:20):

Uh, yeah. So eventually it says it's trained to decline inappropriate requests, right? Right. Okay. Does that mean all requests will be declined, or?

Jacob Steeves (31:29):

No. All right, so let me explain a little bit more about the mechanics of what happened there specifically. We have a number of endpoints on this test network. We have Rob's model, we have a number of other APIs on here, we have some locally hosted models, and we have some random models.


When you query ChatTensor, which is the front end, that front end queries the network, gets a plethora of responses, and then ranks them. Some of those models may not have censored your request, and that's by design, because if a model stays open on BitTensor, it's going to perform better than if it remains closed. That's the incentive aspect of the network.


It just so happens that the front end selected the censored message from one of our endpoints and served that to you. So we didn't censor you; the endpoint chosen to service your message was the one that censored it. If we were to open up this box and look inside, you would see a number of responses. It just so happened that we picked the censored one, which was a miss by the reward model that we wrote. But this is precisely what we're trying to get around here. There will be some content that some providers, some miners, don't want to service, and they'll lose out in this competition, because if they censor messages, they'll get a low reward in the incentive mechanism. So we force open the door with the mechanism. Okay, there's someone else; let's bring them up on the stage. Thank you for your question. That was a great question, thank you very much.


All right, so we're getting Mohammed. Hey, Mohammed.

Carro (33:35):

Hi there.

Jacob Steeves (33:45):

Hello. Okay, I think he left. Maybe he lost his internet connection; I know what that's like. So why don't we close it off then?

Carro (34:01):

Oh, here, here. Do you want to, let's invite Dickie Emerson. Would you like to come up? Or Terrence, or Yuma route, why are dread, Dr. M? Okay. There are still people trickling in and using the demo a little bit. So

Jacob Steeves (34:23):

Yeah, possibly. And also, there might be some people on this call who would like to talk about what they're doing. Like Timo; I know you've been doing some amazing work out there. This can be your stage as well, if you want. Dr. M, you're welcome up.

Carro (34:34):

Sardar is clapping. Sardar, you want to come up?

Jacob Steeves (34:42):

I think he's clapping for Timo. Oh, Timo, Timo, you want to come up instead? You might not want to come up on the call. Okay. But you know, I'm going to call him out and say huge thanks to Timo. He's been doing a lot of work behind the scenes, things that we can't touch as the foundation. He's been doing a great job, so kudos to him. Why don't we wrap up the call? I think that's it; there are no more speakers here. The demo's out there, people will see it; I think it's really interesting. We've gone a long way in one week, from a local model hosted at the front end to a whole prompting network being tested, and this is going to come at the end of April. It's going to really blow us up as a project, I think, because we're going to be the first censorship-resistant API for chat in the world. And we'll be able to see what we're creating, what we're building as a community, in a way that's much more visceral, understandable, and comprehensible to us. For me, that's what's making me really excited about this. I can hardly hold back my excitement on it.

Carro (35:49):

It's been a lot of fun working with the team on this. It's been great building it. I want to shout out the team real quick. The team we have standing behind us are some of the most incredible human beings I've ever had the pleasure of meeting, and they worked tirelessly to build and put all of this together, whether it's the Synapse team, the Nucleus team, the Cerebellum team, or our team, Cortex. For us three up here, we have tons of people standing behind us, supporting us, and big thanks to them for their continued support. I also want to shout out the community, particularly the ChatTensor community committee. They were invaluable in helping debug, find bugs, find biases, and really test so that we can make some of the best things we can. Without the community and without the team, none of this is possible, so big thanks to them for making this possible. We'll leave the demo up till maybe 5:30 or 6 o'clock Central Time, and then we'll take it down. So yeah.

Jacob Steeves (37:23):

Right. Thanks, Robert. Thanks everyone for coming. Have a lovely evening.

Carro (37:27):

Bye guys. BitTensor on three: one, two, three. BitTensor! Good job, everyone. Yeah. Thanks, Rob. All right, bye guys.

Video Description



Donations: 5DHYPY9P8VBSpp6Q8m8hHrq4ME5UTiXfXVFbH32MNV4JcSxH

Bittensor is an open-source protocol that powers a scalable, globally-distributed, decentralized neural network. The system is designed to incentivize the production of artificial intelligence by training models within a distributed infrastructure and rewarding insight gained through data with a custom digital currency.




