Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present.

The Kusamarian (00:00):

Well, we've got Const up. Is it Const, right? Is that a good way to say it?

Jacob Steeves (00:04):

Yeah, that’s it, const, like the variable, you know?

The Kusamarian (00:08):

Yeah, that’s what I was thinking, but I wanted to be sure there, but people will start funneling in here, so maybe we give it a couple more minutes, but really happy you came by. Happy to be here. I heard you guys had just had a community call. Is that right?

Jacob Steeves (00:24):

How'd it go? Oh, yeah, we had a community call. We were just talking about some of the parameter changes that are coming in, you know? Every two weeks, we do a call, talk about what's happened over the last little while, discuss, and then take some questions from the community. Generally, they're pretty pointed and knowledgeable questions, because people are using the software and being like, why doesn't this work? It's an opportunity to yell at us. And it went well, though.

The Kusamarian (00:56):

That was good, man. That's one of the things about Web3, I feel like, you know: the community is super involved, and sometimes that comes with a whole headache of its own, I guess you could say, but I think it's also, you know, part of the decentralization thing, right?

Jacob Steeves (01:12):

It goes two ways, definitely. I mean, you're part of a nation state, and you have people in your community you didn't even know you needed, and then they're there solving problems for you and filling in the holes. I think that if you run a traditional company, you just don't have that visibility, because people are not incentivized to do any work for you whatsoever. Here, you're like, oh, look, we have a whole marketing department that just popped up over here, and they're incentivized to do guerrilla marketing for you, and they're really fantastic.


It’s not something that I would know how to implement myself unless I hired, you know, an expensive marketing team. So I think, like, that’s one of the beautiful things about just sharing ownership with people. It really allows them to express their value into a community, and a normal company doesn’t have that.

The Kusamarian (02:13):

Yeah, I fully agree. I mean, I think Jay could also speak to this, too, with, you know, we have a, kind of like a, you know, a decentralized news media thing that we’re trying to help build up in WAG Media, and really, it uses the Kusama token, right, to incentivize people to create content and to find, like, information out there, and I agree with you 100%. Like, people will take ownership of, you know, their own work and really just come up with some pretty incredible things and just take off.

Jacob Steeves (02:45):

Yeah, and, you know, I know that you guys want to talk about ChatGPT at some point today. We'd been reading their research for a little bit, so we knew it was coming along. We didn't know it was gonna be as good as it eventually was, and we'll talk about that, I'm sure, at some point. But they had to do a lot of labeling in order to get their data set up to speed, and because we're playing with incentives and we have a nation state behind us as a project, we don't even really need to go to, for instance, the Mechanical Turk. We can build those kinds of tools right into our community and hopefully leverage, you know, the power of the crowd. I mean, that's part of our intention when we want to hit that benchmark as well.

The Kusamarian (03:35):

Awesome. Well, you know, I think that we got a decent crowd kind of building up here, and as we go, I'm sure more will come in, so we'll just go ahead and get started, if you don't mind, man. Let's do it. So, yeah, everybody here and everybody gonna be listening afterwards. This is Summit Nights. You know, I'm C-Stan here with the Kusamarian. We got Jay up here, too, and a lot of our regulars. Welcome, everybody. Love it, love the sound effects. I do want to make sure to, you know, let everybody know that this is a completely open space, meaning, you know, I will have questions, of course, that I've prepared, but especially anybody from the BitTensor community, anybody in the crowd that has any questions or wants to just jump in the conversation, feel free to request up. We'll get you up here and get going. So, to start us off, we got Const up here, one of the co-founders of BitTensor, and an active developer, as well. He wanted to make sure we highlighted that. He's definitely a big part of the community, still, you know, still working at it. So, I just want to start with, Const, if you can tell us a little bit about yourself and kind of, you know, your role with BitTensor and how you guys came up with the idea.

Jacob Steeves (04:42):

BitTensor, you know, my girlfriend says it's a bit of a hard one to say, but if you have at least a mild understanding of artificial intelligence, or if you've been in the crypto community for more than a year, you know that it's BitTensor, right? Instead of Bitensor. And I have my roots in both fields, so it was definitely an obvious connection in my mind. I'm a trained machine learning engineer, but I was also very much attracted to the power of decentralization, specifically things like Bitcoin, through my university degree. And after my university degree, I got very fascinated by Bitcoin, politically and also, you know, technologically, but I was interested in it for reasons aside from merely the political. What I saw with Bitcoin was a technology that had created the largest supercomputer in the world while, at the same time, building a censorship-resistant technology that allowed us to have freedom money and all those things that are very important. And underlying all of that is the amount of computational power Bitcoin commands, and the question of what you could use that computation for. I think there's a lot of people that are also interested in that question.


So, how did Bitcoin get so large? And, you know, they were talking about it at NeurIPS. I was just at NeurIPS last week, and we were part of the decentralized AI workshop there. It was very interesting. There's a lot of researchers that have been and are looking at peer-to-peer technology as a way of soaking up all of this computation, which is latent and unused, or which is being incentivized into existence.


That's really what Bitcoin is doing on the flip side. And so, I wanted to take that and turn it towards doing artificial intelligence. So, that's where, you know, I came into BitTensor, founded this project, and wanted to build it. And, you know, I've been at it for many years now. The question is always, how are you gonna do that? How can you leverage computation the same way that Bitcoin does, but to do artificial intelligence? And, you know, our thinking is that, number one, you need to figure out a technique for machine learning that is distributed or decentralized. So, what is a decentralized technique in machine learning, and how can you then map that onto a peer-to-peer network? We use a mixture model style approach, where the model itself is sparse and split up, and then we can stitch it back together from those pieces and build models on top of it. So, that's a mixture model, and that's the approach we went with: joining together sparse networks into larger ones, which are better than the sum of the parts. And that's a little bit like the sparsity of the human mind. So, anyway, that's one tool in our toolkit. The second, I think more important one, is how do you validate machine intelligence?
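To make the mixture-model idea a little more concrete, here is a toy sketch of sparse expert routing: a gating function scores a handful of small "expert" functions, only the top-k are actually run, and their outputs are mixed. This is purely illustrative, with random weights and made-up dimensions; it is not BitTensor's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: eight tiny "experts", each a linear map on 4-dim vectors,
# plus a gating network that scores experts for a given input.
experts = [rng.standard_normal((4, 4)) for _ in range(8)]
gate_w = rng.standard_normal((4, 8))

def sparse_mixture(x, k=2):
    """Route x to the top-k experts only, and mix their outputs."""
    scores = x @ gate_w                    # one score per expert
    top = np.argsort(scores)[-k:]          # indices of the k highest scores
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the chosen experts
    # Only the chosen experts run: that's the sparsity.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.standard_normal(4)
y = sparse_mixture(x)
print(y.shape)  # (4,)
```

The appeal of this shape is exactly what's described above: each expert can live on a different machine, and the combined model can be larger than any single participant's hardware allows.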


How do you tell who's creating it and who's not, who's doing work and who's not doing work? In the case of Bitcoin, it's very easy to do that, right? Because evaluating whether or not a hash is properly formed is basically a single operation. You look at it, and you say, okay, great. In Bitcoin's case, you actually just need to count the zeros at the beginning of the hash. With machine intelligence, it's much more fuzzy. It's much more difficult to discern and separate the non-intelligent from the intelligent. But you can still do it. And, you know, when I was working at Google and also studying machine learning before that, we learned various techniques for determining what is useful and what is not useful. That's the study of what are called salience methods: methods where you can determine whether or not, for instance, a neuron within a network is useful. If it is useful, then you don't want to prune it, you don't want to change it, and the inverse. And so we basically combine those technologies with blockchains to build incentive mechanisms for distributed, decentralized, highly sparse neural networks. And then Bob's your uncle, that's what BitTensor is. That's what we've been doing for the last, you know, two years now. And, yeah, it's been growing like crazy, and it's really exciting to see how this technology is exploding, and the people in this chat are, you know, making it a reality with us.
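As an aside, the "count the zeros" check described above is easy to sketch. The snippet below is a simplification (real Bitcoin compares the hash against a full 256-bit target rather than just counting zeros), but it shows the asymmetry being described: verification is one hash plus a comparison, while finding a valid nonce takes many attempts.

```python
import hashlib

def leading_zero_bits(data: bytes) -> int:
    """Count the leading zero bits of SHA-256(data)."""
    digest = hashlib.sha256(data).digest()
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            # Leading zeros inside the first nonzero byte.
            bits += 8 - byte.bit_length()
            break
    return bits

def is_valid_share(header: bytes, difficulty_bits: int) -> bool:
    # Verification is one hash plus a comparison, regardless of how
    # much work went into finding the nonce.
    return leading_zero_bits(header) >= difficulty_bits

# The search side: grind nonces until one clears a (low) difficulty.
nonce = 0
while not is_valid_share(b"block-header-%d" % nonce, 12):
    nonce += 1
print(nonce)
```

There is no equivalently cheap check for "this model output is intelligent," which is exactly the validation problem BitTensor's salience-based mechanisms are aimed at.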

The Kusamarian (10:06):

Yeah, it’s incredible. I mean, the way you describe that is a really good picture. I think using the human mind is a great way to think about it. And you talk about like, you know, pruning those neurons and pruning those models that are useful versus non-useful. When you say models, are these like individual nodes? Are these like people who are deploying their models from their like, you know, personal, whatever they’ve developed themselves? Maybe if you could expand on that a little bit.

Jacob Steeves (10:32):

It can be, it can really be anything. In practice, people just host the best model that they can, right? That’s the most efficient market strategy for BitTensor right now that I think people have discovered. And there’s even some miners in this call that I’m sure you can bring up to the stage here and they’ll tell you better than me.


But really, these endpoints that we’re stitching together are just functions, they’re generalizable functions. They don’t need to be whole machine learning models capable of doing anything. And they just need to be able to do this translation from inputs to outputs. We call them neurons through like a biological analogy. They could be a single layer, they could be an individual behind a keyboard, they could be your… I mean, obviously none of those things would work.


And in practice, you need computation, like heavy computation, behind these endpoints to do the inferencing, to actually run the function, no matter what it is. So, in practice, they are machine learning models, and they're hosted on everything from, I would say, enterprise-level machines to VPSs and people's home computers. In BitTensor's current iteration, we're just working on text. So they take text inputs and produce outputs in two ways, really. The first way is by producing raw representations, which are just sets of numbers that represent the text. The second is logits, which are the predictions for the sentence, and which allow you to do things like generation. So they're tensor-type outputs that are slightly different.


And that's where the name comes from, BitTensor: the output of these endpoints is tensors, and we can use those tensors to do all sorts of things. We can build applications on top of them. We can train from those outputs. And I'm sure I can get to more of that later if you have a question.
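A minimal sketch of what such an endpoint's interface might look like follows, with a made-up vocabulary and random weights standing in for a real model; the point is just the shape of the two outputs described above: one representation vector per token, plus per-position logits over a vocabulary.

```python
import numpy as np

VOCAB = ["<unk>", "the", "cat", "sat", "on", "mat"]
DIM = 8
rng = np.random.default_rng(42)
embed = rng.standard_normal((len(VOCAB), DIM))    # token id -> vector
unembed = rng.standard_normal((DIM, len(VOCAB)))  # vector -> vocab scores

def endpoint(text: str):
    """Toy 'neuron': text in, (representations, logits) tensors out."""
    ids = [VOCAB.index(w) if w in VOCAB else 0 for w in text.split()]
    reps = embed[ids]            # one representation vector per token
    logits = reps @ unembed      # per-position scores over the vocabulary
    return reps, logits

reps, logits = endpoint("the cat sat on the")
print(reps.shape, logits.shape)  # (5, 8) (5, 6)
```

A caller never needs to know what is behind the function: any process that maps text to tensors of these shapes could serve as an endpoint, which is the generality being described.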

The Kusamarian (12:50):

Yeah, I think that kind of leads us into, maybe if we can parallel this a little bit, for the people that may not have the exact background of artificial intelligence and having a familiarity with it, if we could parallel maybe to OpenAI and this ChatGPT, which everybody’s pretty enamored with, how do you parallel BitTensor versus OpenAI and apps versus ChatGPT and such?

Jacob Steeves (13:17):

Right, it's amazing what they've done. They've really stood on the shoulders of giants to build this application. They've reached into, I would say, kind of unpopular artificial intelligence domains, number one being supervised learning and also reinforcement learning, and used both of them in conjunction with unsupervised learning, and I'll explain what those mean in a second, to build this really, really interesting technology, this really, really interesting application. So they've pulled together all sorts of different types of machine learning to build this amazing tool.


Now, all of the things that they've built, though, ChatGPT, for instance, rest on top of language models that are pre-trained. So what they did is they took their GPT line of models and fine-tuned them on examples which had been generated by a set of, like, 40 or so labelers writing and answering prompts. If you use the GPT-3 model just by itself, you type in a sentence, and it'll try to complete it. You say, the cat went to the park, and it will then say, and it walked past the dog and it was chased, right? Because that's the natural completion of the sentence, but it's not what you need for an actual chatbot. So what they did is they took that model, which was able to complete sentences as if they were naturally occurring, and then fine-tuned it. They took some labeled data sets, generated from individuals, and used those prompts to train it even further to specifically solve the task of being a chatbot. So they made it useful with the supervised data set. So how does this connect to BitTensor? BitTensor is working on just that initial stage. Like, okay, can we complete the sentence? That's the self-supervised or semi-supervised problem of understanding language, and it's usually called representational learning or pre-training.


And we work on that because, I mean, ChatGPT is a good example of a technology that's built on top of that, right? They had the power to train these really intense GPT models, and then they took this fairly bespoke, interesting, you could say, machine learning technique, supervised plus reinforcement learning, to fine-tune it, to turn it into something that was really useful and amazing for people. So to get back to the question, how does BitTensor fit into this? We're focusing on that unsupervised layer and making sure that we can match GPT-3 style models. And at the same time, we're designing and dreaming about what we can do to perhaps even build something like a chat BitTensor on top of our foundation, so that we can, you know, have that in our Discord or build an application to showcase the technology to other people and get people interested in what we're doing.
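The pre-training-style "natural completion" behaviour being contrasted with chat behaviour can be mimicked even with a toy bigram model. The tiny corpus and greedy decoding rule here are assumptions, a stand-in for a real pre-trained language model, but they reproduce the effect described: given a prompt, the model continues it with whatever statistically tends to come next, not with a helpful answer.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for pre-training data (a real model would be
# trained on billions of tokens, not three sentences).
corpus = [
    "the cat went to the park",
    "the cat walked past the dog",
    "the dog chased the cat",
]

bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def complete(prompt, max_words=5):
    """Greedy 'pre-trained' completion: always pick the likeliest next word."""
    words = prompt.split()
    for _ in range(max_words):
        nxt = bigrams.get(words[-1])
        if not nxt:
            break
        words.append(nxt.most_common(1)[0][0])
    return " ".join(words)

print(complete("the cat"))
```

Fine-tuning, in this analogy, would mean reshaping those statistics with labeled question-and-answer examples so the continuation becomes a response rather than more corpus-like text.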

SPEAKER_02 (16:54):

I got a question here. So when you introduce blockchain into this equation, what is that unblocking? And what kind of obstacles does that present? As opposed to, like, the ChatGPT thing, it doesn't seem like they're using blockchain at all, right? So what's going on there?

Jacob Steeves (17:11):

Right, I think, I think, like the best way to understand what we’re doing with blockchain is that blockchains allow us to open up closed systems, bordered systems.


A company is a bordered system which you have to apply to and which is controlled by an HR department. And, you know, as I mentioned at the beginning of the call, they have to pre-think what they want to do and what they need, and it doesn't just get solved for them, because there isn't a whole bunch of people inside their community just solving problems for them. When you use a blockchain, a permissionless system, an open system that's governed by incentive, you attract people who solve problems for you in really efficient ways. And you attract, in our case, computational power that maybe wouldn't be accessible to us otherwise, right? There's all these Ethereum miners out there which have GPUs. Now, people have heard this point many times. I think it's well talked about that if you can attract the Ethereum miners and those GPU resources, you'll be able to get a lot of compute. And that's definitely true.


What we're using the incentive for is to stitch together a whole bunch of resources into a supercomputer which is gonna be on par with, and maybe even exceed, what a company like OpenAI has at their disposal. I mean, that's a billion-dollar problem, because they have a billion dollars from Microsoft, and a lot of our competitors have hundreds of millions of dollars of funding. But our claim really is that we can get bigger if we stitch even more together, right? If we build open protocols, Bitcoin being a good example, we can get really, really big. And then, on top of that, I'm a really strong believer in the power of market dynamics to help with certain things, and one of those is getting people to produce commodities cheaper. What we're doing is representing this thing that we really need in a market: we really need representational knowledge, we really need computers which can solve these micro problems together.


So if we build that into a market, we can leverage the innovation of this open, decentralized, borderless worldwide community to do it even better than maybe a centralized bureaucratic one can do. Now that’s, I’d say the core value proposition of these open systems. Now then there’s a plethora of ethical ones as well, I think. I’m a crypto anarchist.


I really believe that if we can build open systems, then we can distribute ownership and govern them transparently. And that'll be really, really good for us when it comes to controlling something as powerful as artificial intelligence in the future. Now, right now we don't have the best in the world; that's OpenAI. But I think if we can compete with them, it'd be really good for everyone involved.

The Kusamarian (20:20):

Yeah, and that was one of my huge questions once we got into OpenAI. It's not like AI is brand new, right? AI has been around for a while. When I was doing some research for the video that we did for you guys, Google's search engine itself has AI baked in through and through, right? And it's controlled by Google. They leverage it for their own gain, and they even monetize it in certain ways. And so I guess the other thought there is, OpenAI, even though they're maybe not doing that right this moment, it's controlled by a central entity and can be guided: the weights can be changed, the inputs and so on and so forth can be changed by them and censored by them. I wonder what your thoughts are on that.

Jacob Steeves (21:04):

Yeah, absolutely. That's a new domain of censorship potential, where you have all these tools that people are using in schools and across the world to get information, and that's being patched by an organization which is beholden to someone at one point or another. It really answers to them and not the crowd. And I think that's one of the beautiful things about Web3 systems: our ability, with proof of work, to pull in people that would formerly have had no ability to get on the board of directors.


And here they are, effectively being the board of directors of this decentralized project. And I think that's a really good thing. It's a different power base; it's like a democratic power base. Now, the censorship thing is a big one, right? I can really see, in the future, a lot of people using this technology, OpenAI's chatbot, and you say, hey, what do you think about this? And it's like, well, according to experts, X, Y, Z. Maybe that's a good thing.


I think that there's definitely good reason to have a hierarchy or an ivory tower of knowledge, but it shouldn't be the only viewpoint in the world. And I think that as AI becomes more and more integrated into our lives, we should be really careful about who controls it. We should also have alternatives. We should let the free market really choose, and there shouldn't be a monopoly on control of that. And so we built it so that the people that hold a certain amount of Tau could very easily leverage BitTensor to create their own version of these technologies. So we'll have multiple chatbots built by multiple people that you can use if you want to. And that's the way that Twitter should have been built, in my opinion, a long time ago. So that's one. Another thing, I think, is about the revenue stream, and the sense you get when you look at a technology like ChatGPT and go, holy fuck, my job is at risk here. Like, oh my God, this thing is scary good. This thing is gonna replace me. I better get a chunk of this. I better latch on somehow. I better tether myself to this technology in some way so that I'm not forgotten, right? Either by working on it or owning it.


And we provide all of those avenues to people, right? You can, if that’s the case, and we’re gonna build it in BitTensor, you can own it, you can validate it, you can control it that way. You can actually control the hardware that it runs on. You can really, really get yourself integrated into this technology to make sure that you’re not left behind, so to speak. That’s more sci-fi, but I think it’s a reasonable argument.

The Kusamarian (24:27):

Oh yeah, I've seen already that there's been tweets out there, right? Like, if you're not using ChatGPT already, you're behind. You know what I mean? And maybe that's a little bit aggressive right now, but I can definitely see it start to infiltrate the way that people work, right? Because it is such a big augmenter of workflow, and, like you said, it can possibly replace a lot of people's jobs. But Dr. M, I see you up here, welcome up. And I saw you in the chat down below said BitTensor's changed your life. I wonder if you wanted to expand on that a little bit.

Dr.M (25:05):

Sure, thank you so much for having me, and thank you for allowing the discussion about BitTensor. Absolutely, yes, in so many different ways, not just in one way. I feel it's been that way intellectually, materially, and also just in terms of my general outlook on the future. So yeah, I just wanted to say a couple of things, and I'm far from the only one. I am, in fact, relatively new: a couple of months into mining BitTensor, which is the process of serving AI models at a node.


And you know, every time I talk about BitTensor, I get way too excited. But it's really because it's tackling probably one of the most core problems of AI, and that's that this is more powerful than many of the tools humanity has ever had before, and it's simply unreasonable to expect that, in the hands or control of a few, it would be used in the best interest of the many. And so this is, you know, a software protocol that really tackles this in a way that works for the first time. I just find that really groundbreaking. And yeah, I mean, I'll just leave it there for the moment. But really, you know, I find myself not really being able to talk about AI anymore other than to talk about BitTensor, because, frankly, nothing in this space is quite as groundbreaking as what BitTensor is doing. And I think that the world, for the most part, just doesn't know yet. I look forward to the future, and there is very real value building here, you know? We all know that AI takes training to get better, to get more intelligent, and that training is resource-intensive. It's also already well-demonstrated how useful a thing that is to do. On BitTensor, I, as a person who was not hired as an ML engineer at a relatively high-caliber position in a company like Google, have a chance to do this, and that's extraordinary. Nowhere else would that be possible.

The Kusamarian (27:38):

Amazing, amazing take. That’s, yeah, I mean, I have no background in AI, but when I was reading about BitTensor, you know, I myself was getting pretty excited about it. It was a really great, incredible idea that’s working today. And, GP, I see your hand up. Wondered if you wanted to add a few thoughts.

GP (27:59):

Sure, thank you, Chris, Marion, and Jay. Appreciate you giving me the mic, and hey, Dr. M, we keep running into each other.


Also, Const, I just have a point to make when I'm finished with my first point. While my primary degree is in computer science and my post-grad is in AI, I'm not on the programming end of AI as such; more on the ethics, societal change, bias, and the importance of the democratization of AI development in our society, especially its removal from, or should I say at least its ability to extend beyond, the reach of the incumbents right now in Web 2, such as Google and Facebook and everybody else who controls the data sets, trained on the people that are using their platforms. So I think BitTensor is revolutionary. As Dr. M says, it allows people to participate, and I'm a huge fan, super bullish on the whole idea. I think it is the only way to maintain a democratized AI environment and to create use cases that are in the best interests of the majority and not the minority. So yeah, kudos. It's the way to go. I think we have a problem in blockchain right now where the fallout from Celsius, Three Arrows Capital, Do Kwon, SBF, and Alameda has people calling for regulation, which runs the risk of bringing the same old people from the Web 2, IRL regulation into Web 3 and just imposing the same old rules and vested interests upon us. So from an ethical, bias, and social inclusion perspective, I'm a 100% supporter of this model of AI training, contribution, and democratization of AI. And I just want to say max respect to everybody and anybody, most of all Const and the team, for working out these initiatives. They're the future of making a positive AI environment for us all. Thanks for listening.

Jacob Steeves (30:24):

Thank you so much.

The Kusamarian (30:26):

Yeah, thanks for those thoughts there. You know, something that you brought up there was having it in the hands of many, and something you alluded to was it being unstoppable, right? And when you talk about the power of Bitcoin compute, you know, an entire country, China, shut down mining, right? But Bitcoin just kept on chugging. And if you think about that from a BitTensor perspective, in an AI model, it sounds like that is kind of what you're reaching for.

Jacob Steeves (30:59):

Yes. Now, I think GP, you know, brought up a really, really interesting point in the last point he made there, around regulation and how it's very likely the hammer is coming down on the space. You know, we've focused a lot with BitTensor on markets, you know, stitching together resources, building open systems. I think that's all important.


But at the end of the day, you know, blockchains actually exist to create censorship-resistant technologies as well, right? And if we don't make sure that BitTensor lifts off, so to speak, into the internet, into a decentralized system where regulators really can't fuck with us, we're fucked. You know, we worked really hard at the beginning of this project to make sure that ownership was decentralized, and that's our goal, really. We need to have that ingrained into the technology. And GP wants to say something, so I'm really curious how he would respond to that.

The Kusamarian (32:17):

Yeah, go ahead, GP.

GP (32:19):

Yeah. Oh, thank you. Sorry, there’s a few hot mics, so that was Const speaking right there, was it, yeah?

Jacob Steeves (32:27):

Yeah, that was me.

GP (32:28):

Yeah, okay, bro. Yeah, yeah, yeah, I worry about that, because, as you rightly say, an entire nation of approximately one-fifth of the world's population, whose use of the existing AI in the context of social media has introduced a social credit index in their country, which is oppressive. Therefore, their use of AI is more than likely going to follow the same trajectory. They banned Bitcoin, and Bitcoin carried on regardless, so to speak, because Bitcoin is the true blockchain.


I guess the risk about deploying AI models, or attempting to democratize AI models, on blockchains is that it's super important to pick blockchains that, to use the technical term, have the lowest chance of being fucked with by external parties. There are too many blockchains that fall into the category of blockchain-in-name-only. And I think it's highly ironic that the hammer of regulation is coming down on us because of the actions of the existing establishment in centralized exchanges, which are diametrically opposed to the concept of blockchain, and the actions of Gensler and vested interests in the SEC and the Fed. So some people could throw on a tinfoil hat and suggest that that might have been a three-card trick to try and get regulation into the space. So the choice of platform or blockchain, or multiple blockchains, or the obfuscation of those blockchains, when using them as a platform for democratization of AI and model training, building data sets, mining and so forth, is super important, because otherwise you're just putting big crosshairs on your back, and someone's gonna take that pot shot someday and shut you down.

Jacob Steeves (34:29):

Yeah, exactly. And we built on Polkadot because we really liked the security model of Polkadot, and we like that we can build any type of state transition function. Right now we've been focusing on the machine learning aspect of things, but one of the big pushes in the coming years will be to make sure that the actual chain is highly decentralized, and we'll have to figure out a way of incentivizing that, because yes, it needs to be…


God forbid Ala or myself or anybody else on the team or the foundation comes under legal pressure or physical threats; the technology can't die with us. And that's very likely in this space, and we're aware of that. So yeah, great point, GP.

The Kusamarian (35:19):

Yeah, and that leads in perfectly. You've already kind of said why Polkadot. Being someone who's been in Polkadot and Kusama for a while now, we've seen a lot of parachains come in that have a specific model where you crowdloan to them, they drop you tokens, there are collators that offer their own tokens, and so on and so forth. And after reading a little bit, BitTensor isn't really that. So I wonder how BitTensor sees themselves becoming a parachain and what that process will look like.

Jacob Steeves (35:58):

Well, we are going to win a slot, hopefully in early January and we’ll be launching January 10th on that. Whether or not we get the slot, we’ll have the new chain done by January 10th and so the system will be ready for people to use. We’re moving networks.


As for incentivizing block production, I think the really simple approach is just to distribute a portion of the inflation to block producers. Whether that's 5%, 10%, 15%, I'm not really sure. That really depends on how desperately we need the mining network to secure the decentralization of the chain, versus how much we need inflation to go into the mining network on the machine learning side.
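For a rough sense of what that parameter choice means, here is the arithmetic with hypothetical numbers; the per-block emission and the 10% share are illustrative assumptions only, since none of these figures were confirmed on the call.

```python
# Hypothetical split of per-block token emission between the
# machine-learning miners and the block producers.
EMISSION_PER_BLOCK = 1.0      # tokens minted per block (assumption)
BLOCK_PRODUCER_SHARE = 0.10   # 5%, 10%, and 15% were all floated

def split_emission(emission, producer_share):
    """Divide one block's emission between miners and block producers."""
    producers = emission * producer_share
    miners = emission - producers
    return miners, producers

miners, producers = split_emission(EMISSION_PER_BLOCK, BLOCK_PRODUCER_SHARE)
print(miners, producers)
```

The trade-off in the transcript falls directly out of this: every point of emission moved to block producers for chain security is a point removed from the machine learning incentive, and vice versa.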


We can also fork BitTensor to use other technologies if it really looks like that’s needed, and I don’t think this will happen, but it is very possible, because BitTensor right now as a technology doesn’t require its layer one. Like, we are agnostic to Polkadot. But I’m a firm believer in Polkadot. I actually really love the technology. I know that Gavin’s been working on it for a really long time, and I really respect him, respect the team and the technology. I’ve used it now for a while, written in Substrate for instance, and I just know that this is real technology. There’s a lot of FUD in and around Polkadot right now, but I’m not down on the technology whatsoever. So I’m not going to be moved by FUD. I think really what will change over the coming months and years is that people will discover that this technology is legit and they’ll come work with us.


Yeah, there’s a whole bunch of questions. I’m just looking at them and wondering what people have to say.

The Kusamarian (38:01):

Yeah, same. I just wanted to highlight there that, a lot of the things that you guys have been speaking about in terms of hands of the few versus hands of the people, decentralization first, like everything you’re saying there really does seem to share the ideologies of Polkadot, right? And it’s like a match made in heaven sort of thing.

Jacob Steeves (38:19):

And I might say, people that have been in the space for a really long time, my bias is that they care about existence more than people that built projects later in the game, because the latter are like, oh, why do we even need proof of work? And I’m like, do you even know what this whole space is? Like, it can be frustrating. And I think that it’s because we haven’t even yet had the regulatory attacks that we’re gonna see, but they’re gonna come, you know? And when they come for the core technology of blockchain and its own tenets, like, we’re gonna get hit really hard. So I just wanted to mention that. And I think Gavin is one of those people. Gavin is definitely someone that’s been around here for a really long time and understands the technology.

The Kusamarian (39:12):

Absolutely. All right, well, we’ll move over to, I think Alpha’s had his hand up the longest maybe, and then GP and Megan after that.

Alpha (39:20):

Hey man, yeah, thanks for the opportunity to speak. Glad to have you guys here. I am learning a lot. And I appreciate you guys talking about all this stuff because this is really, really impactful. Like many of us, I think I tend to look towards the future of what are the oncoming narratives and stories that Polkadot is gonna be a part of. Three of those kind of come to mind that you guys were talking about, I think is very important, which is the importance of governance, right? For the betterment of the ecosystem and for the people.


Security, for what you guys were talking about specifically here, not only relating to Polkadot, but what you guys were talking about with Tensor. And so like all of this, and I would say anonymity or like the ability to not be, to be decentralized, right? To keep your self-sovereignty. I think Polkadot’s a big proponent of that. It’s one of the biggest things that I look for in an ecosystem. And one thing that I see that you were talking about, you were talking about like, you made the reference to the example of the internet, right? And that, this is something I personally struggle with. We know in the marketing world, a lot of AI tools are already being used. I use some of them. And so one of the things that I see as a problem is that AI tools are only getting better because they’re competing against each other.


And so we know right now that the internet, a fully open decentralized internet, right? It’s not necessarily a good or a safe thing. This is a reason why we keep a tab, like tabs on like websites that teach you how to make bombs, et cetera. A concern that I see in the future, because I’m seeing some of it already being used, is taking modeling from one existing AI that is open sourced and using it in a centralized AI to train it and to make it better so that it becomes better than the decentralized AI. Now, the problem with governance is that it tends to be slow.


And so one of the things that I was thinking about for BitTensor was like, I’ve always had the idea, and I’m glad you guys are bringing this, it’s magnificent, which is like the AI to keep on track has to be better than the next AI. But to be the safest, it has to be decentralized control. Like one entity cannot have that control. I’m really afraid of what Google’s doing because as a marketer, I know that Google controls like 95% of all data. And it’s like, they control everything. And thus having that under one person is incredibly dangerous. But how do you think about that? Then I have a follow up question afterwards.

Jacob Steeves (42:09):

Right, yeah, that’s a really good question. So like, I’m not opposed to regulation, and I’m actually not even opposed to censorship, as long as there’s competition from companies with their own applications, which are more or less regulated or censored than others in the space. A good example would be something like Twitter. It should be possible, in my opinion, that you can just fork all of the data on Twitter. And then I can use a Twitter that’s not censored or a Twitter that is. And my government can regulate certain Twitters, and it’s up to their prerogative in that democratic system.


If there’s no competition, there’s a very high chance of concentration of power and corruption, because there’s nothing pulling it away from centralized abuse of power. And I think that’s the middle ground you’re not opposed to. Like, for instance, we released the playground that was trained on top of BitTensor, and we built regulation into that tool. We had a hate speech filter.


Anybody could also have built that tool and not applied the hate speech filter. And that’s what makes it an open system. It’s people’s choice to use the technology how they want. Now, the other thing you said was, people taking AIs which are decentralized and turning them into centralized AIs, which are better, right? That’s actually the, oh, sorry, did you wanna add something?

Alpha (43:55):

Yeah, and the reason is because I’m seeing it where the centralized AI can train faster because it has no limitations based on governance or being open source or whatever the case may be. Like they feed exactly what it needs to be fed.

Jacob Steeves (44:08):

Yeah, that’s a little bit like what I was talking about at the beginning of the call, where they fine tune it to something that is more specific to their use case. We built BitTensor to do that innately. Like, that is the core technology. It’s a general knowledge base that the clients to the system can then fine tune to whatever they want, in the same way that you can use a fork of Twitter to make your own Twitter and fine tune it in a particular way. So the concept of sort of stealing is innate to the technology we’re building. Like, it’s inherent.

Alpha (44:53):

Great, and so that leads perfectly to my other question, which is if this is innate in the blockchain and in the open AI, or like the AI that you guys are doing in the network, how does blockchain anonymity, which is such a big part of blockchain, play a role into this? How do you add, I’m guessing you would need to have some kind of DID solution to be implemented so that you’re able to actually keep tabs on making sure that there’s always good faith actors.

Jacob Steeves (45:24):

Well, I don’t think we need those things. I think we already have bad faith actors in BitTensor, and the system needs to be robust to bad faith actors, or it can’t be an open source and decentralized technology that runs with an open boundary, right? There are people in our community, and here’s one of the things that we talked about at the beginning of the call, like, it’s a double-edged sword: there are people constantly trying to break us.


And that’s actually like, you can’t, you’d throw the baby out with the bathwater if you tried to remove all of them from your system because they’re actually the most innovative. That mentality is actually incredibly innovative and we’re leveraging it to make the system better.

Alpha (46:16):

So making the system better, what do you see the challenges that brings to the populace?

Jacob Steeves (46:24):

And what do you mean by the populace? Like the…

Alpha (46:25):

I would, yeah, like people using this, for example, being able to use a model to do something as a bad faith actor, whether it was to break ATMs.

Jacob Steeves (46:36):

Yeah, I mean, it’s nonstop. I mean, there are so many different corner cases that come up, and we’re constantly patching and learning. I mean, an example from today: we had a mechanism for validators to stay in the network, which was that they’d have 1024 tau staked on their system. So what did intelligent, gamification-minded people do? They were like, oh, great, well, I’ll just get a whole bunch of 1024 tau and fill the network with my miners that can never get deregistered. I’m sure most of that is just jargon to the people on this call. That turned out to be a bad mechanism. We discovered that this was an approach that people were applying, and we now have to change it, right? So it’s really continuous, and that’s one of the many challenges that we’re solving. It’s really, really interesting, my job.
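The exploit described here can be sketched as a toy model. The 1024 tau threshold comes from the call; everything else, including the rule and function names, is a hypothetical simplification:

```python
# Toy model of the flawed rule: any key with >= IMMUNITY_STAKE tau staked can
# never be deregistered. A whale can then split a large stake across many keys
# and squat network slots permanently.

IMMUNITY_STAKE = 1024  # threshold mentioned on the call

def deregistrable(stake: float) -> bool:
    """Under the flawed rule, only keys below the threshold can be pruned."""
    return stake < IMMUNITY_STAKE

# A whale with 10 * 1024 tau registers ten keys instead of one...
whale_keys = [IMMUNITY_STAKE] * 10
# ...and none of them can ever be deregistered, crowding out honest miners.
squatted_slots = sum(1 for s in whale_keys if not deregistrable(s))
```

The flaw is that immunity depends only on an absolute stake threshold, so the cost of permanent slot squatting scales linearly with the number of slots rather than competitively against other participants.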

Alpha (47:37):

Yeah, yeah, definitely. I mean, this is one of the things that I tend to quarrel over, which is like, if you’re not continuously innovating or pushing it, you’re continuously patching. So it’s like you’re just fighting it. And so I guess, man, one more question, which is if you see this, what is the biggest danger that you would actually say, like, okay, I know all of the benefits. I see all of the greatest things here. What is something that actually I’m like worried about or could possibly happen?

Jacob Steeves (48:12):

That’s a great question. I feel like I’m in an interview. Oh, sorry. Yeah, no, no, no, no. You know, like for a job position: what are your biggest flaws? That’s a really good one. I mean, we don’t know what would happen if there was, for instance, a large group of extremely well-funded people inside of our community that were committed to breaking the system. I think that’s always the risk, right? The formation of cabals, the formation of groups of people that are coordinated and malicious, because especially in decentralized, distributed systems it’s hard to know who’s who, and that’s part of the anonymity. And also, how do you detect them? Like, how do you know what they’re doing? And I think, as we’re pretty small, there are definitely actors that could take us out. It’s like in the early days of Bitcoin: it wasn’t censorship resistant against the government until like maybe this year, right? The United States government could have just bought all the ASIC miners in, you know, 2012, and then it would have died, right? That would have been a large-scale coordinated actor. And I think that there’s a certain level of scale and maturity in the technology that we need before we’ll be fully resistant. And we’re aware of that. And that’s kind of the thing that keeps me up at night: okay, what does that thing look like? What could they be doing to manipulate the incentive or hack the system? And, you know, we have to be aware, and there’s unknowns.


That’s a really good question.

Alpha (50:01):

Well, I’m glad you guys are here in Polkadot because I think what Polkadot offers with governance too, and the decentralization and the security, I think you guys are gonna, I mean, I’m ecstatic to have you guys here and be part of the eco. You guys have a lot to offer.

Jacob Steeves (50:19):

Thanks, Alpha.

The Kusamarian (50:21):

Alpha, really that question that you had last, I thought you were asking if iRobot is a possibility. Honestly, that’s what I was wondering, but GP, my friend, how are you? You got something for us?

GP (50:33):

You know, I’m going to defer to Megan, the token blonde, because I’ve asked a couple and she had her hand up. So if that’s okay, if Megan still wants to speak, I’ll defer and wait till she’s finished. Absolutely.

Megan (50:48):

Thank you very much. Yeah, actually, I just had a few technical questions, if that’s all right, because some people here in the audience know that I’m a fan of proof of work, and I’m always asking about solutions in the substrate ecosystem that are using that. And I’m just wondering, and I do apologize if I missed your explanation of how proof of work factors into your solution earlier. But honestly, a lot of that stuff just went right over my head because I’m not like a computer scientist or anything. But yeah, I was just curious, and you mentioned that your solution doesn’t rely on a layer one right now. Is that right?

Jacob Steeves (51:32):

No, it is a layer one. I mean, we have our own chain, if that’s what you mean. And so we have our own block transition function, which is currently a POA, so it’s proof of authority. There’s no proof of work involved, and neither is there proof of stake. It’s really just a set of validators with their own keys that get to sign blocks, so it’s not a fully decentralized proof-of-work block transition. And on top of that chain is where we run the BitTensor incentive mechanism, which has its own consensus.


So, like, think of it like a single smart contract, or a pallet, if you know Polkadot, that runs on top of a blockchain, and that’s where the BitTensor incentive mechanism runs. And that’s where we focus most of our research and development, because that’s what makes us unique. Now, because Polkadot’s a great, portable ecosystem of technologies, we can always plug and play POS and POW at a later date for the block transition. So right now, the way in which we distribute tokens is fully inside that pallet, which is a POI, you might say, or it’s like Yuma consensus. It’s our own form of consensus.
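A heavily simplified sketch of that two-layer picture, with every name and the scoring rule hypothetical: a fixed proof-of-authority validator set at the base layer, and a pallet-like incentive function on top that distributes token emission independently of how blocks get signed:

```python
# Layer 1: proof of authority. Only a fixed set of keys may sign blocks.
AUTHORITIES = {"alice", "bob", "charlie"}  # hypothetical PoA key set

def block_valid(signer: str) -> bool:
    """PoA block transition: a block is valid iff an authority signed it."""
    return signer in AUTHORITIES

# "Pallet" on top: a toy stand-in for the incentive mechanism, splitting
# emission proportionally to whatever scores the consensus assigns miners.
def incentive_pallet(scores: dict[str, float], emission: float) -> dict[str, float]:
    total = sum(scores.values())
    return {miner: emission * score / total for miner, score in scores.items()}
```

The point of the separation is that `incentive_pallet` never looks at `AUTHORITIES`: the token distribution logic is independent of the block transition, which is why the base layer could later be swapped for POS or POW without touching the incentive mechanism.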

Megan (52:54):

Okay, very interesting. Thanks for explaining that.

The Kusamarian (53:01):

GP, did you have something?

GP (53:03):

Yeah, bro, yeah, thank you. Yeah, so, you know, we have precedent for the establishment making moves on new democratized or privatized methods for the populace to communicate, in the form of secure email, secure comms, the encryption debates, and the dark web. I mean, the way they started their attack on that was with warrant canaries, NatSec letters, and red notices.


They infiltrate, observe, profile, plan, and then execute in order to dismantle. And we have precedent for that in HansaWeb and Alphabay, dark web markets.


I think the thing about bad actors in an open source environment is that in the existing infrastructure, in my experience working for intelligence services and working for anti-corruption initiatives, the space is full of honeypots right now. They’re observing, they’re looking at weaknesses in the system. Regulation is not here yet, not because they’re slow; it’s not here because they’re looking at the actions of people in order to determine what the possibilities are. And I think it’s very much demonstrable by the Pegasus software provider in Israel and their warehouse of zero days and exploits, which they provide to governments to undermine secure comms globally. I think similarly with blockchains, they’re finding or looking for the loopholes, and they will act upon them. I think finally, the patent farming, the buyout offers, the attraction of money will allow people like Google to retain ownership of these initiatives.


And ultimately money is what appeals to a lot of people, and buyouts appeal to a lot of people. So the obfuscation of corrupt behaviors: some of the case management or developed-intelligence systems we built, which were built on blockchain, were specifically catering around identifying bad actors and preventing deletion, obfuscation, alteration, manipulation, false augmentation, exfiltration, and willful negligence. And you have to build those countermeasures into your architecture if you want to be robust. But if the founder is doxxed, and Alpha AI did mention blockchain anonymity, the unfortunate truth is that when you get on blockchain and you have an address, you are anonymous, but your entry point to blockchain, 90-plus percent of the entry point to actually transact, is through a KYC AML exchange, a CEX. So it’s not quite as anonymous as it’s made out to be. So yeah, the open source and the detection of corrupt behaviors, the ability to maintain independence from regulatory pressure, money pressure from buyouts, or legislative pressure, it has precedent in the areas of secure comms, encryption, the dark web, and secure email. And there have been people who have been the subject of warrant canaries and NatSec letters who have had to let their systems be infiltrated while the organizations gathered intelligence on the communications. And if the individuals who develop these new methods are going to be doxxed, they are the weak link in the chain, because the pressure brought to bear on them to give up the keys to the kingdom is the weak point in the system. So yeah, it’s based on experience, and I hope that makes sense.

The Kusamarian (57:32):

Titus, you had your hand up, my friend. Did you have something to add?

Titus (57:37):

Yeah, buddy. Wow, wow, wow. Wow, I mean, that is scary shit. I’ve been a victim of that, but I don’t do cryptocurrency or nothing like that or investments. I work for the children, man. I work for the babies in this world. And it seems after I basically gave my life to them and put my head on the chopping block and snitched on some folks, they came after me, man. And when I say they came after me, the government came after me in creepy ways, like on Facebook and Twitter and things like that. And I pissed some folks off so bad. I’m literally, I’ve literally been placed in the algorithm and it seems, I might be wrong, but it seems like we all are programming the AI. You know what I mean? To understand the destructive nature of what this world is.


If truth and love prevails, it’s gonna mess the algorithm up because the algorithm was created on evil and lies and it’s gonna mess the algorithm up. It was created on evil and lies and murder, rape, and pillage. Like, I mean, in life in general and on the stock market.

Dr.M (59:04):

Oh, gosh. Yeah.

The Kusamarian (59:10):

What a great sound effect, there you go. But yeah, I mean, I think what GP brought up, and what you’re touching on a little bit, Titus, is you can’t necessarily take away those bad actors. And what Const has also reiterated a lot is that part of the decentralized model is you gotta take some of those tough things and be able to learn from them, adapt, and make your product more robust. And it sounds like, you know, with a decentralized system, that is exponential: the more nodes you have, the more people that are contributing to that system, the stronger the system is overall, and maybe the quieter those bad actors become.

Jacob Steeves (59:52):

It, you know, the technology inherent to machine learning is imitation. And so in many ways, the people that we use to train it will be reflected in the technology itself, right? So the people that, and then the question is who gets to choose, right? Who chooses the data? And that’s kind of where we start our, you know, our journey with BitTensor, right? So, yeah.

The Kusamarian (01:00:22):

Yeah, that’s a good point. And that’s something I, you know, I think a lot of the people in the Polkadot ecosystem may echo, in terms of the tech reflecting the people that are within it too. I think we see that a little bit, right? Like, Polkadot’s focus is interoperability, and a lot of these teams in this space really work together. And it’s almost like the tech and the people have found themselves in a way. I don’t know if that makes sense, but it’s something I’ve started to notice a little bit there, that, you know, the product attracts a certain type of person and certain groups. But GP, you have your hand up?

GP (01:01:00):

Yeah, thanks, man. Yeah, you know, that’s a core area of my interest, which is the bias of the training data set reflecting the bias of the people who are providing the data for it. And it’s one of the real weak links for either poor or negative outcomes or false positives. I think Dr. M has heard me mention this before. And, you know, the quality of your output is only as good as the quality of your input. I think the point that Titus made, in terms of, you know, bad actors in the system and the personality of the individuals or organizations who create these systems, is very much like the personality that a dog has based on the first six months of its life, reflecting its owner’s behavior. AI is very like that, in terms of it takes on the personality of the people training it. You know, the training data set that Google has is basically the entire world population who use it. Same with Facebook and all the facial recognition and the NLP programming, and, you know, the annotation of our images has basically been free consulting that we all willfully or willingly provide to them, annotating pictures to help train their AI. Those types of resources, in the context of what is a very small ecosystem now in Web3, are not as available. So one wonders about how swiftly the AI and so forth can evolve within the blockchain, considering the number of users of the blockchain right now. But, you know, I agree with Titus. I think the invention reflects the personality of the inventor.


And we’ve got to be very careful about the quality of training data that we provide. But yeah, you know, I have other things to say about that, but I just don’t want to go on monologues. But yeah, be careful.

Titus (01:03:05):

Please be careful.

The Kusamarian (01:03:08):

Alpha, did you have something to add there?

Alpha (01:03:12):

Yeah, yeah. I mean, I think that’s why I was talking about, you know, you want to keep it open source, but I do think that like Gov2 or even something like what FileNetwork is doing with, you know, the tracking using soulbound tokens or NFTs to track an identity without losing your anonymity. For example, it’s like walking down the street in New York City. People know you’re there, they walk into you, they might even bump into you, but they don’t know anything about you, right? They know how you look and that’s it. So there’s a certain level of anonymity we just accept as a populace, right? The same thing can happen in blockchain. And so I do think, you were talking about 90% of anonymity is kind of lost because you have to go through some kind of KYC process, right? That’s correct. This is why they want to ban mining or any kind of like, this is why the, I really do believe that things like Bitcoin or like just general mining, the ability to generate some form of transactable value outside of the system is super dangerous because it keeps your anonymity. If there’s, if you have a, just a random address that has a miner that is generating Bitcoin and Ethereum, that is transactable and now has value, that is dangerous because they don’t really know who’s doing it or how they’re doing it, who that person is. And then you can take those Ethereums and use them wherever you want. So that keeps your anonymity. I do feel like we were moving in that direction, which was one of the most bullish things for crypto in my aspect. But we get into this danger now where it’s like, because of all the tools and amazing ability that crypto and blockchain bring to the table, you need to have some form of DID to at least have some kind of on-chain identity without ever giving away your real identity.


And I think that’s something that BitTensor could, you know, apply in a way through regulation. You said you were pro-regulation and obviously keeping things… Well, I think that would be like a really interesting discussion to have.

Jacob Steeves (01:05:18):

I’m okay with the endpoints regulating themselves and choosing the regulation that they want. I want to keep regulation out of the base layer as much as possible. That, I think, should be free.

Alpha (01:05:32):

Yeah, and so you’re using decentralized governance for that, right?

Jacob Steeves (01:05:35):

Yes.

And, you know, we haven’t really figured out how we can do that. So you’ve raised a lot of interesting points. And we could definitely use a DID system. And there’s a lot of really interesting ideas in that space. It’s very difficult, you know, to get soul-bound tokens onto a chain. You know, in a way it’s like, it really defeats the power of crypto, you know, in some sense when people become super KYC’d. And I really think that, you know, as much as possible, we should fight for, we should figure out ways of doing it without needing identities on the chain.


It may be a requirement though. And, you know, it may just be that the regulators come and really, really pressure us super early, to make it impossible to do things like governance without doxxing our community. We have to fight that battle, I think, pretty soon.

Alpha (01:06:33):

Yeah, I mean, like I said, it all comes down to: if you’re never having to part with, if you’re never ever having to generate anything in the system, and you never have to use a CEX because you don’t use dollars, because you mined all of your coins, then you become a very anonymous person.

Jacob Steeves (01:06:50):

Yeah, and I think that that’s great. I really want to promote that as much as possible. It’s freedom for the people that work with us.

The Kusamarian (01:07:02):

Yeah, I agree there. And GP, you want to add to that?

GP (01:07:06):

Yeah, thanks. Doesn’t that, Alpha AI, depend on the device you’re using, though, because of the complication of anonymity? Your entry point to crypto, okay, may not be KYC AML, but you’re on a digital device, which is profiled, which is trackable. You know, there are multiple layers of breadcrumbs and digital footprints you’re leaving before you get to the point where you can, you know, become an anonymous user of crypto or blockchain.

Alpha (01:07:41):

Yeah, that’s why they had to take Tornado Cash out, and that’s why the founder of that is in jail, but SBF isn’t, you know. That’s what’s really worrying. It’s the fact that, and you know, everyone wants to talk about how Tornado Cash was bad for some reasons, right? But there’s a reason why that guy’s in jail and SBF, or people like SBF, aren’t. It’s because the less anonymity you have, the less powerful you are. So, I mean, all of it plays in. There’s strength there.

GP (01:08:13):

Yeah, I totally agree. I mean, they got his jail sentence extended by a further three months until they solved his case.


Yeah. There was a crowdloan or something to pay for his defense. Well, as you say, SBF has given interviews to the New York Times, so that plays exactly to the narrative that we’re all aware of here, which is, if you’re the small guy with the innovative idea who’s got something that’s a threat to national security, you’re immediately gonna be put behind bars. But when you’re SBF, playing with the big boys, actually playing a three-card trick to probably accelerate regulation, you’re probably getting puff pieces from the New York Times, and you have the absolute cheek to be still tweeting that you’re trying to learn what happened. I mean, how stupid, or sorry, that reflects how stupid these people think we are.


Luckily, that’s not the truth. Const and Alpha AI and Titus: so, over the last four years, as a result of our work in other areas, we put together a four-part solution, which we, well, I don’t wanna mention the name of my project if that’s against the rules of the floor.

The Kusamarian (01:09:32):

That’s all right, go ahead, Ben.

GP (01:09:33):

Yeah, so at The Answer, we have combined digital identity with what we call TAMI, The Answer access method, which is based off two patents I hold, in a blockchain data processing protocol and a virtual blockchain access method. That gives a very high degree of anonymity and also gets rid of federated and centralized control, which are typically two of the major reasons why we don’t have anonymity or control over our data in Web2. It also, Titus and Alpha AI, allows the person with the digital identity to proxy their identity, or the elements of their identity that they wish, for whatever service they’re choosing to subscribe to. It also takes away the walled gardens and gated silos and, by extension, the power that they hold, retaining all the metrics and analytics under the digital identity, which can be represented as an NFT sitting alongside, within the digital identity and the access method.

The user themselves is employing big data analytics and AI tools, AI tools in particular to scan physical objects, to take their unstructured data and make it structured data, and to use analytics to perform operations against that data, which they can then sell, as opposed to giving it for free to organizations. So as opposed to you being profiled, you profile yourself, and can then sell or monetize the specific elements of your own behaviors to organizations who put value upon that.


This is manifested in the real world by dApps, dApps which you can sign on to using centralized and federated logins, if you wish to, or with DIDs. And the objective is to migrate these tyrannical Web2 processes to Web3 over a period of time, to result in the achievement of what we’re always talking about, which is mass adoption, but without having to do the mass education at the same time. It undermines federated identity, walled gardens, surveillance capitalism, the erosion of civil liberties, mass surveillance, abuse of power, privacy breaches, unauthorized monetization, and gated silos. And finally and foremost, underneath all of that, it supports the liquidity within the coins infrastructure, the crypto infrastructure, by not allowing it to be subject only to rampant speculation, but also having the coins be used for the exchange of products and services by people in the real world who know nothing about the underlying technology. In other words, it abstracts the technology of Web3 totally from the use cases at the front end. And with that, elements of it can certainly be part of the attempt to keep the AI-on-the-blockchain experiment away from the prying eyes of those who would seek to undermine it.

Alpha (01:13:05):

Man, I have a friend named Bruno. I think you guys would get along. He also hates the United States.

Titus (01:13:13):

There’s always a wager, gentlemen. There’s always a wager between good and evil. So both people have to fight their own fight: the good fight, the bad fight. And like I said before, in what I told you about what I’m into, in faith in God, I’m not sure if everybody believes, and it doesn’t really matter. It does, but I’m not in control of that, of course. But in faith, I do what I do: in faith of the algorithm, in faith of the good, not the evil. The good, not the bad. So the anonymity of yourself can also be dangerous, because whenever you’re behind a block wall, how are you gonna know, you know what I mean, who to look for when you’re scared and running?

GP (01:14:10):

Well, just to clarify something, I don’t know who said it, but I don’t hate the United States. I hope that didn’t come across in my-

Alpha (01:14:21):

No, it’s a joke. It’s a joke, because Bruno talks about it a lot, but he’s a decentralized Maxi as well. And we just joke that way, because everything you named off is very like American, like there’s a lot of corporatism, right? So it’s just, it’s a joke. Don’t, I didn’t mean any offense.

GP (01:14:38):

Oh, no, no, I’m not offended. I just didn’t want you to be offended. So yeah, no, no, man, I have a thick skin, bro. I don’t get offended that easily. I just wanted to make sure. It’s not a political statement, anything I’ve said. All it is, is an attempt to, well, not an attempt, it’s been a lot of years of work and a lot of money that we’ve bootstrapped ourselves to try, you know. I’ve been around a while, okay? You know, I saw the semiconductor when I was a kid on the science shows, and the doom and gloom that people said would follow its adoption. I worked in the early internet, in New York City and Tribeca. You know, I was part of the startups, you know, like Vignette, ATG Dynamo, Haute Couture, you know, if you remember those guys, in London. You know, I went through the sort of winter of, you know, 2002 to 2008. I then became an alphabet agency contrarian and an advocate for civil liberties, privacy, and the maintenance of our democratic rights when social media started to be adopted. It was clear that Zuckerberg and others didn’t have our best interests at heart.


I got keenly into the non-undermining of encryption and wrote for peer lists until they were closed down. You know, a million infosec professionals globally writing about their stories. And I think these subjects are simply moving, or these challenges are simply moving from that domain into this domain. And it’s incredibly important, while we’re all excited about the technology, that we understand the realities of the way the world works. Somebody spoke about the good feeding the bad, or the fight against good and evil. I mean, the white wolf and the black wolf, it’s a famous, you know, Mesoamerican, American Indian, Native American, I should say, saying: you’ve got to feed each equally in order to maintain balance. But, you know, the fight between good and evil is one thing. In the fight between those who seek to oppress and those who seek to maintain freedom, you’ve only got to feed one side of that debate, which is the people who fight to maintain freedom and resist oppression. And, you know, I think it’s naive, you know, for people to think that this is not going to be interrupted and innovation is not gonna be stifled by pressures that are gonna come from the establishment. I mean, man, they start wars, you know, for energy that last decades. You know, if they think their financial system is gonna be undermined, just imagine what they’re willing to do to stop that happening.

SPEAKER_02 (01:17:25):

Well, GP, it’s really good having you up here today and you’re welcome back anytime. I’m gonna throw it to our buddy, Credit Card Sages. What’s going on, bud?

SPEAKER_06 (01:17:33):

Yes, sir, appreciate it, Jerry. GP, I was pretty struck by your analogy earlier of the dog and the owner, if you will, and this is kind of one of my bigger concerns on a political and philosophical level with the application of AI: the kind of perceived removal of the human hand, if you will. And for a lot of people, and I don’t know if I could necessarily claim expertise in this area, but trying to gain understanding, trying to gain awareness of a world changing very quickly on a technological level, many people are just users. Most people just want to, you know, apply the technology. And I guess I have a lot of concern for when this perceived removal of the human hand philosophically comes, and people don’t realize just how much of the spirit of the dog, if you will, is actually, you know, encapsulated from the spirit of the owner. Those data sets that you were talking about for inputs, like, is the only way to, I guess, safeguard that to head toward Web3 and blockchain? Is that kind of what I’m hearing, particularly with this technology? I guess I’m just trying to kind of piece through this from a layman’s perspective, but does that make sense?

GPS (01:19:04):

Yeah, sure, Kurt.

GP (01:19:07):

Yeah, I mean, it breaks down to different areas of training data sets. I think maybe just to refine the analogy a little bit: bias is introduced into training data sets by the people who select the training data, who have conscious or unconscious bias in certain areas. That has resulted in very bad outcomes for things like AI-run access to social programs in North America, where the AI has been programmed or trained incorrectly and has stopped people from receiving social assistance or social housing or other assistance. And because of the lack of explainable AI, or the lack of research into explainable AI, the AI is a black box. So when somebody goes to ask why they’ve been turned down when they’re entitled to an entitlement, their query can’t be answered, because basically no one can say why the black box has made the decision. So that’s one poor element of training data sets: they are not accompanied by algorithms of XAI, where you can explain why the AI has made the decision that it has. The second area is willful input of biased training data. I don’t want to get into a number of examples of that right now because some of them are quite offensive. But the data fed into a system can result in genders or ethnicities being removed from a particular progression path of a selection method, for example. The third, and there are others, but the third and final is false positive training data. And a good way of using XAI, or certain elements of XAI, is to use heat maps to ask the AI, how did you make this decision? And there’s a good example of it in the context of a project that was used to identify, and Dr. Willowtherby spoke about this before, pictures of wolves in the wild versus dogs.


And the training data was fed into the system, and the AI had almost 100% accuracy choosing wolf versus dog. So as far as they were concerned, they had a successfully trained AI algorithm which could, to a nearly 100% degree of accuracy, identify a wolf in an image. However, when they applied heat maps to it, it turned out that the reason it was identifying a wolf was that all the training data they provided for wolves had snow in the background.


And it wasn’t really recognizing the wolf. It was just basically saying that this particular type of animal in the training data set is surrounded by snow. So the AI, people thought, is identifying wolves, whereas the heat map shows it’s making its decision based on the presence or absence of snow. That’s how training data sets can bias decision-making in various different scenarios, and why it’s important to make sure that XAI travels alongside AI training data sets.
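[Editor's note] The heat-map check GP describes can be sketched in a few lines of code. Below is a minimal occlusion-sensitivity toy with a deliberately biased stand-in classifier; all names and data here are hypothetical illustrations, not the actual wolf/dog project:

```python
import numpy as np

# Toy version of the wolf/dog story: a "classifier" that has secretly
# learned to key on snow (bright pixels) rather than the animal itself.

def wolf_score(image):
    """P(wolf) from a biased 'model': it only measures overall brightness,
    i.e. how much snow is in the picture."""
    return float(image.mean())

def occlusion_heat_map(image, patch=4):
    """Occlusion sensitivity, one simple kind of heat map: hide each
    region behind a neutral gray patch and record how much the score drops."""
    base = wolf_score(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.5  # neutral gray
            heat[i // patch, j // patch] = base - wolf_score(occluded)
    return heat

# A 16x16 test image: bright snow everywhere, a dark animal at rows/cols 4..7.
img = np.full((16, 16), 0.9)
img[4:8, 4:8] = 0.1

heat = occlusion_heat_map(img)

# Hiding a snow region lowers the wolf score; hiding the animal itself
# actually *raises* it. The heat map exposes that the model watches snow.
print(heat[0, 0] > heat[1, 1])  # snow cell matters more than the animal cell
```

Occlusion sensitivity is only one of several heat-map techniques (gradient-based saliency is another), but it captures the idea in the anecdote: the regions that move the score most are the snow, not the wolf.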

SPEAKER_02 (01:22:38):

That’s totally wild. Nice to see you, man. Haven’t seen you in a while. What’s going on in your mind?

SPEAKER_03 (01:22:45):

Hey, man. Good to see you guys. Great to be back on stage, man. I’ve been traveling, work, whatnot, but I’ve been missing my summer nights, man. I came in and it was a heavy conversation. I just jumped in in the middle, so I don’t want to interrupt or take us off track, man, but I’m glad to hear this, man. It is something that is, I’m enlightened by it. So just chilling, man.

SPEAKER_02 (01:23:12):

Sweet. Well, it’s great to have you back. We’ve actually been off for a couple of weeks, so this is a weird conversation to come back to. This is heavy. We actually had to let Paulus go, which is fantastic, but maybe I kind of wanted to circle back to the beginning here. GP, by the way, are you validating on the BitTensor network? Not right now. No. So you’re just here, like, you’re intrigued by the project?

GP (01:23:42):

Yes, I’m here doing due diligence, sir.

SPEAKER_02 (01:23:45):

Okay. All right. Very nice to have you here. Well, I’m wondering, maybe Dr. M there, because earlier in the conversation, Const was kind of talking about empowering, like, you know, the people who own the network together. In this conversation, how do you take the responsibility of building this network supporting AI, and how do you see yourself as an individual playing a role in the outcome of all this going forward?

Dr.M (01:24:16):

What a great question. Yes, you know, there is actually tremendous autonomy in the BitTensor network. Well, the current reality is that I’m really just taking some of the well-performing, pre-trained models released by other companies, fine-tuning those to be better, and running those on BitTensor. But intrinsically, you have pretty much full autonomy in terms of what you choose, you know, what you want to train or basically give BitTensor. For the moment, at least personally, I have a hard time imagining that anyone would have any ill intentions there in terms of the training of AI, but that’s, you know, definitely something to think about, or a concern that can come up, because BitTensor needs to be the way that it is in the sense that basically each node’s model is a black box to everyone else other than the operator of that node.


It kind of needs to be this way because this system runs on incentive and on competition among the people serving models, basically, and it really works, too. You know, BitTensor has shown that the pace of what has been achieved in a year is just astounding, and I think some of the larger AI companies are probably just not paying attention, but, you know, should they become aware of it, they’ll be like, wow, because, yeah, it’s tremendous. So, you know, there are definitely concerns, many of which Const mentioned along the way.


For me personally, I think that the biggest one is just, and Const touched on this, that, you know, I’m basically in love with BitTensor. It is a baby, relatively speaking, and it is a tiny network, and I care so much for it, but I also know that there are, you know, big, potentially ill-intentioned forces that can align against it, and that’s kind of my biggest worry. But internally, within the BitTensor system, what you’re doing as a node, you know, what you’re training or what model you’re using or really what you’re feeding BitTensor, is kind of hidden from everyone else, and it needs to be that way because it’s the competition that drives the innovation here. And really, I mean, in a way, BitTensor has set out to prove, and has done so in many metrics and ways already.


It is set out to prove that this sort of a way to innovate in machine learning is more efficient than the established paradigms of developing ML, which is basically two ways, right? You either have industry giants doing it or you have academia, and I think that, you know, well, what I hope for is that BitTensor will ultimately succeed and that these concerns will not materialize to become ultimately a roadblock, in which case, surely, it will be, you know, widely known that this is the way to do AI, and we’re going to see, I suppose, the equivalent of, like, Ethereum and Generation 3 smart contracting platforms following what BitTensor has done.

The Kusamarian (01:28:12):

Yeah, that’s great. That’s great stuff. You know, I think that keys into one of the powers of blockchain, right, and into what GP has mentioned in the past as well: it gives the little guy a little bit more power, right? Because you have this decentralization, you have this blockchain, which is that truth machine in the background driving the incentives, and, you know, if enough people buy into it, over time it becomes censorship-resistant, right? And so, you know, BitTensor is doing some incredible things, and I did have one more question, maybe for Dr. M or GP, if you guys have an opinion on this.


You know, I think I heard Alaa in one of your spaces, Dr. M, which are great, love listening to those, mention that, you know, BitTensor is not to the point yet where they have surpassed OpenAI, you know, with this advent of ChatGPT. Of course, this has been really great, but from an infrastructure standpoint, I guess, what would be the point where, you know, they think that they have, maybe not surpassed OpenAI itself, but are the leaders in that space?

Dr.M (01:29:30):

Gosh, that’s a great question, and one that, you know, I, in fact, asked Const at the last one of those spaces, because I was really curious, too. I mean, it’s clear to me that BitTensor is not quote-unquote competitive yet, but I don’t want that to be mistaken for a knock on what has been done, because really, you’re looking at a much faster pace of development than can basically be achieved any other way.


So, not to diminish that, and that’s extraordinary, and that’s why everyone who’s involved is so excited about this. But basically, to answer the question: Const said in about a year. That’s what he said about two or three weeks ago when I asked him the same question, you know, when will BitTensor be competitive? And with AI, it is the nature of AI that basically, up until you’re, say, the best or near the best, you’re going to see basically no use. No one’s going to be interested, because, you know, whatever, if I can use GPT-3, why would I go and use GPT-2? Now, they are by the same entity, but, you know, you get my point. But then the moment that you’re there, all of a sudden you go from no interest to all the interest in the world. So, you know, we look forward to that, of course. But, yeah, a year from now would be two years from the beginning of this system, which is kind of incredible, if you think about how long it’s taken to get to GPT-4, let’s say. So, exciting times, with worries along the way, of course. But I just, you know, really feel that democratization and decentralization of AI is really a cause worth fighting for, and that people should generally want AI decentralized, and if they don’t yell out wanting this, it’s because they’re likely just not aware of all the ways we are already being manipulated by the centralization of AI.


I think that’s kind of, you know, my summary of the sentiment here.

GP (01:31:46):

Yeah, totally agree, Kusamarian. In my view, as for when they will overtake OpenAI, well, you know, they’re coming up on their second anniversary, and we will see a point where word of mouth spreads and a tipping point occurs. Because of the democratization, the involvement of the little person, and the lower or nonexistent barrier to entry, as opposed to the massive barrier to entry to contribute in other domains, I expect that the one-to-two-year estimate that Dr. M was given by Const is pretty on point. And I think once that happens, they’ll just speed away into the distance. That’s my take.

The Kusamarian (01:32:40):

Outstanding. I really appreciate you guys coming up here. This has been a great, great conversation with some incredibly enlightening things in the AI space. I think everybody’s really been enamored with GPT-3, right? And, you know, seeing a project like BitTensor come to Polkadot and share a lot of the same values and expand on something that’s, you know, a part of our future, it’s incredible. It’s exciting to talk about. So I appreciate you guys coming up, and Const, if you catch the end of this, appreciate you too. Thank you for coming on. And, you know, we’re looking forward to seeing you guys more often, feel free to come around. And, you know, we’ll be having this every week for everyone else, every Thursday around 9:30 Eastern, we’ll be hosting these spaces, talking to different projects. And a lot of times we open it up to the crowd to kind of talk general Polkadot, but I think this is a good place to end it. You know, this has been such a great conversation. I don’t know where we really go from here, right? So thank you guys. And we will see you next week.

Dr.M (01:33:50):

Much appreciated. Have a good day.

Video Description


This Clip was recorded on The Kusamarian’s Twitter Space Chat on Dec 8, 2022

Host: The Kusamarian Co-Host: Jacob Steeves (Const)

Subscribe to The Bittensor Hub for everything Bittensor!

Bittensor is an open-source protocol that powers a scalable, globally-distributed, decentralized neural network. The system is designed to incentivize the production of artificial intelligence by training models within a distributed infrastructure and rewarding insight gained through data with a custom digital currency.





