**A** (0:07):
So today I'm going to pick up a bit from Jay's talk yesterday. Jay gave a really good general overview of Cosmos and interchain communication, so this is going to be a much more technical talk on IBC, our inter-blockchain communication protocol, and it'll show a bit of our latest thinking on how this entire process is going to work.

First, to recap a little of what Jay said: why interoperability? The Internet was basically created to connect independent and relatively isolated networks, and to create a standard protocol that allows these siloed networks to communicate with each other. That's what the Internet Protocol, or IP, was. Cosmos has this tagline, "the Internet of Blockchains," and it's more than just a marketing term. It was a very specific design goal, and a lot of our designs for Cosmos and IBC are heavily inspired by the design of the traditional Internet protocols.

So why do we need this interoperability? The main problem is that assets live on different blockchains, each blockchain has its own isolated innovations, and there's no way for me to use the innovations of another blockchain with the assets of my current blockchain. One common question is whether this is similar to atomic swaps. The difference is that an atomic swap is just two people exchanging tokens in a trustless, atomic way; it doesn't actually let assets move between chains. If I have $5 worth of bitcoin, atomic swaps let me easily get $5 worth of ether, but they don't let me move my bitcoin onto the Ethereum blockchain.

So the problem with Peace Relay was that it was very EVM-specific, basically just a hackathon project. It didn't work general-purpose across many chains, and it didn't handle complex timeout and error handling. For example, say I locked up my ETC on the Ethereum Classic chain, but the Ethereum blockchain is completely censoring me because, I don't know, they don't like me. Then my ETC is stuck in this in-transit limbo: it doesn't exist on either the ETC chain or the Ethereum chain, and there's no way to get it out on either one. So we want some sort of timeout mechanism so tokens don't get locked up in transit by accident. It also didn't define common standards for how to parse packets; it was designed very specifically for the one use case we had in mind. I think we hard-coded the Ethereum Classic addresses we were trying to use. And the final thing is that light clients for proof of work are very computationally difficult to implement on chain, so we didn't actually do that. For the hackathon project we kind of just assumed the header was correct.

The benefit here is that Tendermint light clients make it very easy to do these on-chain verifications. Verifying a Tendermint header is roughly as complicated as verifying a multisig; it's about the same computational overhead. So now let's head into IBC, starting from this ability to write efficient on-chain light clients.
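To make that "roughly as hard as a multisig" point concrete, here is a minimal sketch of the kind of check a chain would run when handed a signed header. The types and the `VerifyHeader` function are made up for illustration; this is not the actual Tendermint or Cosmos SDK API, and the real check also covers canonical vote encoding, the chain ID, and validator set changes.

```go
package lightclient

import (
	"bytes"
	"crypto/ed25519"
)

// Validator and SignedHeader are simplified stand-ins for the real
// Tendermint types; the field names here are illustrative only.
type Validator struct {
	PubKey      ed25519.PublicKey
	VotingPower int64
}

type SignedHeader struct {
	HeaderBytes   []byte   // canonical encoding of the header being voted on
	ValidatorHash []byte   // hash of the validator set that must have signed it
	Signatures    [][]byte // Signatures[i] is validator i's signature, or nil
}

// VerifyHeader is the multisig-like check described in the talk: given the
// validator set we already trust (whose hash the chain stores), accept the
// header only if validators holding more than 2/3 of the voting power signed it.
func VerifyHeader(trustedVals []Validator, trustedValHash []byte, sh SignedHeader) bool {
	if !bytes.Equal(sh.ValidatorHash, trustedValHash) {
		return false // header claims a different validator set than the one we trust
	}
	var total, signed int64
	for _, v := range trustedVals {
		total += v.VotingPower
	}
	for i, v := range trustedVals {
		if i >= len(sh.Signatures) || sh.Signatures[i] == nil {
			continue
		}
		if len(v.PubKey) != ed25519.PublicKeySize {
			continue
		}
		if ed25519.Verify(v.PubKey, sh.HeaderBytes, sh.Signatures[i]) {
			signed += v.VotingPower
		}
	}
	// +2/3 of voting power must have signed -- the same shape as an m-of-n multisig.
	return signed*3 > total*2
}
```

The whole thing is a handful of signature verifications plus a voting-power tally, which is why it's cheap enough to run inside another chain's state machine.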
So we have two chains, chain A and chain B, and chain B is running an on-chain light client of chain A. The question is, how does it actually get to know the blocks of chain A? As a normal light client, the way I get a block is by polling a full node: hey, give me the latest block. But a blockchain like chain B doesn't have the ability to go out and ask a full node. So instead we have relayers, which are basically parties who are incentivized to relay headers from one chain to the other. A relayer will see a header of chain A and push it as a transaction to chain B, saying: hey, this is the latest header on chain A. Chain B can verify it and say, okay, yes, these signatures are correct. Maybe if a relayer sends over false headers it could be punished somehow, but the relayer is already paying a fee to put that transaction on chain, so maybe we don't need that.

The same thing goes the other way: the same relayers will probably be incentivized to move headers from chain B to chain A. Our abstraction of this is called an IBC connection. It's basically a two-way block header transmission buffer where you send block headers between the chains, so A is keeping track of the headers of B and B is keeping track of the headers of A. A single chain might have IBC connections to multiple chains; for example, the Cosmos Hub has connections to many other blockchains (we'll get to the Cosmos Hub later). Basically, each chain keeps a mapping of chain IDs to the latest trusted validator hash, and once the other blockchain has a header, you can send it proof-carrying data.

So the purpose of these IBC connections is to move headers back and forth, but the packets themselves don't actually go over the connections directly. We have another abstraction called IBC channels, which are one-way buffers. In Go these are literally called channels: a one-way thing where you put something in and it comes out the other side. Using the Internet analogy, you can think of these as similar to network ports. I may have an IP connection to your computer, but we don't shove everything over one connection; we have different ports, and different ports can be communicating over different protocols. Maybe one of our port pairs is communicating over TCP while another one is communicating over UDP, and they can be talking about different things.

The reason we want the ability to have many channels is that each channel relays packets in sequence, but not everything needs to be done sequentially. Maybe I have one channel that's designed for token transfers while another is designed for governance transactions. If I want to send a vote from one blockchain to another, that doesn't need to be totally ordered with my token transfers. So you can use multiple channels for some parallelization, and like I said, each of these channels could have different operation modes.
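As a rough mental model of the connection/channel split, here is a sketch of the state each chain might keep. All the names are invented for this example; the real IBC spec also tracks handshake state, packet commitments, and proofs against the counterparty's headers.

```go
package ibc

// Connection is the two-way header-transmission abstraction: each chain keeps
// track of what it currently trusts about its counterparty.
type Connection struct {
	CounterpartyChainID  string
	TrustedValidatorHash []byte // latest trusted validator set hash for that chain
	LatestHeight         int64  // height of the latest verified counterparty header
}

// Packet is some proof-carrying piece of data sent over a channel, e.g. a
// token transfer or a governance vote.
type Packet struct {
	Sequence uint64 // position in the channel's ordering
	Data     []byte // application-specific payload
}

// Channel is a one-way, ordered buffer riding on top of a connection --
// roughly a Go channel between chains. Different channels (token transfers,
// governance, ...) can make progress independently, like different ports.
type Channel struct {
	ConnectionID string
	NextSendSeq  uint64 // sequence to assign to the next outgoing packet
	NextRecvSeq  uint64 // sequence we expect for the next incoming packet
}

// Receive enforces in-order delivery: a packet is only accepted if it is
// exactly the next one in the sequence (its inclusion proof against the
// counterparty header would be checked before this point).
func (c *Channel) Receive(p Packet) bool {
	if p.Sequence != c.NextRecvSeq {
		return false // out of order (or replayed) -- reject
	}
	c.NextRecvSeq++
	return true
}
```

The important property is the `Receive` check: within one channel, packets are totally ordered, but separate channels each keep their own sequence, which is where the parallelism comes from.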
So, UDP style: if you're sending data from one blockchain to another and you don't really care whether it gets to the other side, then when it doesn't arrive, a user can observe that and just rebroadcast it again and again until it gets through. But there are some things where you do want guaranteed transmission. If you want to start building sharding-like systems, where you want atomicity between transactions and you make sure something gets executed on the other side, then you do want to start using TCP-style channels. I haven't modeled this yet, but I'm pretty sure we can model all of the different forms of sharding and transfers just by playing with what the definition of a timeout is and what the failure mode of a timeout is. Vlad talked about his sharding earlier today; I'm pretty sure his sharding is basically just this, where you don't have timeouts and you just say we will halt everything until we get a return message.

And then the final thing is hubs. We've been using this giant Internet analogy, and the last piece of it is: what is the topology of this network of blockchains connecting to other blockchains? Theoretically you could have every blockchain connecting to every other blockchain in a mesh-network, peer-to-peer way, but we've seen on the Internet that this generally tends not to happen; it doesn't scale very well. The Cosmos Hub, as well as the other hubs that emerge in this ecosystem, is basically the equivalent of an Internet service provider. They say: instead of every zone having to connect directly to every other zone, they can all connect to one blockchain, and we will connect you to everyone else. This is really good because instead of having N-squared connections, where each blockchain connects to every other one, you have on the order of N connections, where everyone connects to one hub and the hub connects everyone to everyone else. It may sound a bit like centralization, but the Cosmos Hub and other hubs are supposed to be decentralized, distributed systems.

The other thing is that hubs also provide other services. Your ISP provides you routing, but oftentimes ISPs also provide secondary services like cloud hosting or DNS resolution. Over time these hubs will also evolve to offer separate services as well. A hub may say, okay, we'll do shared security: if you don't want to bring your own validator set, we'll provide you one. Or maybe they'll provide plasma functionality: you keep your own validator set, but we can be this arbitration system for you. Or DNS systems, identity systems. There will basically be competition between hubs to provide the best services. And we're not going to do that. Cool.
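To pin down the UDP-style versus TCP-style distinction and the "no timeout, just halt" mode, here is one way the sending chain's timeout handling could look. Everything here (the policy names, `HandleTimeout`, the refund/halt callbacks) is invented for illustration; it is not the actual IBC packet-timeout logic.

```go
package ibc

// TimeoutPolicy captures the three failure modes discussed in the talk.
type TimeoutPolicy int

const (
	// FireAndForget: UDP-style -- if the packet never arrives, nothing on the
	// sending chain needs to be undone; the user can simply resend.
	FireAndForget TimeoutPolicy = iota
	// RefundOnTimeout: for token transfers -- escrowed coins are released back
	// to the sender once the deadline passes without an acknowledgement, so
	// funds can't be stranded in transit by a censoring counterparty.
	RefundOnTimeout
	// HaltOnTimeout: the "no timeout" mode -- the channel (and anything
	// atomically tied to it) stops making progress until a return message arrives.
	HaltOnTimeout
)

// Outgoing is a packet the sending chain is still waiting to hear back about.
type Outgoing struct {
	Sequence      uint64 // which packet on the channel this refers to
	TimeoutHeight int64  // counterparty height after which the packet is dead
	Policy        TimeoutPolicy
	Acked         bool
}

// HandleTimeout decides what the sending chain does once it can prove the
// packet was not delivered before TimeoutHeight. refund and halt stand in for
// whatever application-level effects those would be.
func HandleTimeout(o *Outgoing, counterpartyHeight int64, refund, halt func()) {
	if o.Acked || counterpartyHeight < o.TimeoutHeight {
		return // already delivered, or still in flight
	}
	switch o.Policy {
	case RefundOnTimeout:
		refund() // e.g. release the escrowed tokens back to the sender
	case HaltOnTimeout:
		halt() // e.g. freeze the channel until a return message shows up
	case FireAndForget:
		// nothing to do; the user can just rebroadcast
	}
}
```

The point of the sketch is the claim made above: fire-and-forget data, refund-on-timeout token transfers, and halt-until-acknowledged sharding-style atomicity mostly come down to what you define a timeout to mean and what happens when one fires.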