sunnya97.com

Cosmos vs ETH 2.0

The discussion centers on the differences between Ethereum 2.0 and Cosmos, focusing on their architectures, consensus mechanisms, and approaches to scalability and inter-chain communication.

Summary

In this discussion, we explored the intricacies of Ethereum 2.0 and Cosmos, delving into their respective architectures and consensus mechanisms. I shared insights on the upcoming phases of Ethereum 2.0, emphasizing the transition from proof of work to proof of stake, sharding, and the challenges of data availability. We also dissected Cosmos's approach to interoperability and its unique hub-and-spoke architecture, highlighting how blockchains can communicate without becoming siloed. The conversation touched on the importance of security and the varying needs of application-specific chains, while also addressing the implications of validator centralization and the role of exchanges in staking. We pondered the future of smart contracts across both ecosystems and how they might evolve, ultimately pointing toward a landscape rich with experimentation and innovation in decentralized applications.

Key Takeaways

  • The Ethereum 2.0 upgrade aims to address scalability through sharding, moving to a proof of stake consensus, and enhancing execution environments, but faces challenges like data availability and network partition risks.
  • Cosmos operates on a hub-and-spoke architecture allowing multiple blockchains to communicate, emphasizing the importance of interoperability without requiring a single consensus mechanism across all chains.
  • The Cosmos community is focused on providing application-specific blockchains with varying levels of security, which allows developers to tailor their chains based on their needs, contrasting with Ethereum's more uniform security model.
  • Successful implementation of shared security in Cosmos will depend on validators' willingness to stake on multiple chains, balancing the need for robust security against risks of over-leveraging any single validator.
  • The development of non-financial applications and services is seen as crucial for the future of both Ethereum and Cosmos, with potential growth in areas like sovereign identity and reputation systems.

Detailed Analysis

In this discussion, we delved deep into the distinctions between Ethereum and Cosmos, particularly focusing on their architectural approaches, scalability solutions, and the future of decentralized applications. A recurring theme throughout the conversation was the importance of interoperability among blockchains. Danny Ryan and Sunny Aggarwal articulated the necessity for these ecosystems to communicate, emphasizing that as the number of blockchains grows, so does the potential for siloed systems. This calls for robust protocols like Cosmos' Inter-Blockchain Communication (IBC) and Ethereum's evolving sharding strategy to ensure seamless interaction and data availability across platforms.

This dialogue is set against the broader backdrop of the evolving blockchain landscape, where scalability and usability remain pressing challenges. Ethereum's transition to a proof-of-stake model and the introduction of sharding signify a critical step toward addressing its scalability issues, particularly as demand for decentralized applications continues to surge. At the same time, Cosmos champions the idea of application-specific chains that allow developers to create tailored environments for their needs, which contrasts with Ethereum's more monolithic approach. This divergence in philosophy not only highlights the innovative solutions emerging within the space but also the differing priorities of their respective communities.

The implications of these discussions are profound. As we navigate the complexities of decentralized systems, the conversation about security, governance, and economic incentives becomes increasingly relevant. Both Ethereum and Cosmos are experimenting with governance models that empower their communities, yet they also face challenges related to centralization and validator distribution. The concerns raised about exchange centralization in staking mechanisms are particularly salient, as they could undermine the core ethos of decentralization that both ecosystems strive to uphold.

One of the strengths of this dialogue lies in the candid acknowledgment of the limitations and challenges both Ethereum and Cosmos face. For instance, while Danny and Sunny extol the virtues of their respective technologies, they also recognize the need for continued experimentation and adaptation. The conversation's critical examination of issues like data availability, validator distribution, and security models showcases a nuanced understanding of the intricacies involved in maintaining decentralized networks.

This video will be most useful for developers, blockchain enthusiasts, and investors who are keen to understand the current state of the blockchain ecosystem. It provides valuable insights into the ongoing evolution of Ethereum and Cosmos, equipping viewers with a clearer perspective on the choices they might make in building or investing in decentralized applications. By highlighting both the potential and the pitfalls, the discussion encourages a more informed approach to navigating the blockchain space, fostering innovation while remaining vigilant about the challenges ahead.

Transcript

Speakers: A, B, C, D
**A** (0:09): Very quickly, let's go through speaker bios. So do you want to start, Danny, because you're a guest. **B** (0:14): Sure. Thanks for having me. I'm Danny Ryan. I work with the Ethereum Foundation, mainly on the Eth2 project, which is a major upgrade of the consensus for Ethereum into a proof of stake and sharded blockchain. I do some research, spec writing and a lot of coordination on the project. So I work a lot with client implementers and other people kind of touching the project. So I have a good, like, high level overview of everything. **A** (0:45): Are you kind of the Sunny of Ethereum? **B** (0:49): The what? Sorry? **A** (0:50): Are you the. Are you the Sunny of Ethereum? **B** (0:55): Yeah, maybe. I don't know, we could spend some time discussing that. We might have a similar kind of role. And by role I mean, like, there's a ton of shit to do and there's a certain scope of it that I do. You know, the definition of the role is a little bit loose. Just kind of make sure the whole project keeps moving forward. **A** (1:16): Yeah. **C** (1:17): You know, Danny, the first time I met you, like probably two, three years ago, I think I was watching Iron Fist at the same time, and when I met you, in my head I still call you Danny Rand. So I'm sorry about that. **B** (1:31): That's fine. **C** (1:33): Yeah. So, yeah, my name is Sunny. I'm one of the core developers and researchers at Cosmos, working on the project for about three years. I do a lot of, you know, work with the Cosmos SDK development, but also sort of research and future roadmap planning. So figuring out, writing specs and designing the next version of Cosmos for the next five years. **A** (2:00): Yeah, I would say that Sunny's specialty is doing crypto economics and, yeah, practical applications of that kind of research on blockchains. And so for myself, I go by Django Unchained. That's my real name. 
And yeah, so I've been working at Cosmos for like three years, in the blockchain space for six years now. So I'm like an old lady in that regard. But the stuff that I'm working on is usually like content and building ecosystems in a decentralized way. And so a lot of incentive structures and I guess a little bit of mechanisms go into that. So, yeah, right now you're partaking in one of those efforts, which is online meetups. Yeah. Okay, so we're just gonna dive right into the questions. Danny, just to get you caught up to speed, we like to start these meetups with each of the representatives steelmanning the other's project. And so if you were to put on your Cosmos hat and you were to explain Cosmos like the audience is five, then that's what I do. And likely Sunny's gonna start and put on his Ethereum 2 hat and then talk about what he thinks Ethereum 2 is about. So do you wanna start? **B** (3:32): Yeah. Yeah. So I actually was thinking about this on the hike today, because I noted in the email I got that this may be one of the things that we talk about. And I decided, instead of researching Cosmos more, to explain it as I've picked up in conversations and content and things over the years. I can't say I've ever done a very deep dive, but I think I kind of know what's going on, and it might be wildly incorrect. So we'll see. Okay, so Cosmos is founded upon the premise that there's going to be many blockchains. I don't know what the guess is, but call it probably more than 50 relevant blockchains, maybe on the order of thousands. And in such a world, we want these blockchains to be able to communicate and interop, because siloed stuff, like, you know, that's Web 2. Siloed is not where we're going. Like, we're trying to build this open, decentralized thing. 
So in Cosmos, we're going to build blockchains, tooling and infrastructure and protocols to ensure that these hundreds or thousands of blockchains don't become siloed and can interop in a, you know, awesome and fantastic way. So to do that, there's a handful of things that we're going to need. First of all, probably, is like, you know, how are these things going to communicate? If we can't agree, if we can't speak the same language, then we're not going to really get that far. And building ad hoc solutions for communication on an application-by-application basis or blockchain-by-blockchain basis isn't going to be what we want to do. So instead let's see if we can think about the grand scope of things and generalize into certain structures and communication protocols so we can solve the communication problem. So I think that's IBC. Then we have this problem where we have many blockchains that want to talk to each other, and understanding what's going on in another blockchain can be difficult. So even if you have a general purpose programmable blockchain, you still probably have to program into it, in some smart contract layer, an understanding of, like, finality or blockchain headers. And you're seeing some of this on Ethereum today. For example, like tBTC, they're updating block headers and making claims about Bitcoin, but it's, again, on a very ad hoc basis. So if we can instead build what I think is called a hub, we can handle all of the complexity of things linking into the hub. So we handle it on a single-blockchain basis instead of on an N squared basis, and it will facilitate communication. As long as I can know what the hub's saying, then I can know what the hub's saying about other blockchains. 
So then comes the actual blockchain portion of Cosmos, which is to build these hubs, build this blockchain that can make claims about the state, and probably what we'd normally call a finalized state, or maybe in a proof of work chain, a certain depth of difficulty. Thus if I can understand that hub, now I can make claims about those other chains. And the way that I would then make those claims, in a nice way, is by using that IBC. There's also, I think, an SDK, and I think that has to do with: if we are following the premise that there's going to be many blockchains, let's make it easy to build these blockchains, and let's make it easy for them to plug into this communication protocol and hub that we built, and thus the SDK to build that out. So Cosmos also uses crypto economic, like, Byzantine fault tolerant mechanisms to have a proof of stake protocol and to finalize things and to have accountable safety, similar to, I think, all secure proof of stake avenues. And so there's validators and there's tokens and all that good stuff. And I think there can be many hubs, and maybe there's like a main one right now that has a lot of value behind it, but, you know, people can spin these up in a generic way. There's probably a ton more there. **C** (8:10): But that was amazing. That was spot on. I think that was better than like most Cosmos explanations I've heard. **B** (8:21): That's based off of conversations with you and like Ethan and stuff over time rather than me reading technical content. So I hope I did it. **C** (8:31): That's awesome. Yeah, that's perfect. **B** (8:35): There are some interesting things that I was thinking about today and we can get into it later, like the security of these communications and how it essentially degrades depending on what you're trying to communicate with. 
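Danny's point about handling complexity "on a single blockchain basis instead of on an N squared basis" can be illustrated with a little arithmetic. This is an illustrative sketch only, not anything from the Cosmos codebase, and the function names are made up:

```python
# Illustrative only: with pairwise bridges, every chain needs a link to
# every other chain, so links grow quadratically; routing through a hub
# keeps the link count linear in the number of chains.

def pairwise_links(n_chains: int) -> int:
    """Direct bridges needed if every chain links to every other chain."""
    return n_chains * (n_chains - 1) // 2

def hub_links(n_chains: int) -> int:
    """Links needed if every chain connects only to a single hub."""
    return n_chains

for n in (10, 100, 1000):
    print(n, pairwise_links(n), hub_links(n))
# 100 chains: 4950 pairwise bridges vs. 100 hub connections
```

The hub also concentrates the integration work: only the hub has to understand each new chain's consensus, rather than every chain understanding every other.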
But that's like some interesting questions based off of some of the things that I think I learned today by thinking about what Cosmos probably is. But we can get into it in a little bit. **A** (8:56): Yeah, we should get into data availability sometime along this call as we talk about different blockchains and how they talk to each other, as well as how ETH contends with that in Eth2. **C** (9:09): Yeah, sure, yeah. So I'll give my pitch for Eth2. So today Ethereum obviously is the most popular smart contracting platform, and it's definitely the most secure smart contracting platform for sure, with just the amount of hash rate behind it. And its native token, ETH, has the most value, and the amount of composability benefits we get on Ethereum is just great, because it's very easy for one transaction to call multiple smart contracts and for these smart contracts to work with each other. But the problem that we run into is the scalability of it. So Ethereum currently runs today as sort of a single threaded blockchain. And so what we want to do is fix the scaling problem of Ethereum primarily. And there's a couple of ways to do that. I would say the three sort of main ways to go about doing this are: one is the execution, like moving it from a single threaded system into a multi threaded system, which is what sharding is. The next is changing the consensus protocol to allow for finality, which will, you know, there's a whole bunch of reasons why asynchronously safe protocols are faster than non asynchronously safe ones. And, you know, it's part of the UX as well, where that comes through as scaling as well. It's all part of scaling. Then finally, on the execution layer itself, a shift from EVM-only to supporting many more execution environments that can then begin to support the more complex things that people want to do, such as uploading new zero knowledge circuits and stuff. Things that are just not
possible, or are limited today, with the EVM design. Yeah, and so a lot of these projects, especially the first two, have actually always been part of the roadmap for a long time, and they sort of shifted over time. And while they once used to be more orthogonal goals, through the development we realized that they actually share quite a bit of pieces on their roadmap. And so these goals that were initially sort of separate things, like you had a Casper team working separately from sharding, are now sort of all part of this one unified roadmap of Eth 2.0, which consists of these sort of phases of rollout and is designed with an upgrade path that is sort of more of an opt-in system. Because we're making all these large radical changes to the design, we don't want to undermine the platform that already exists today, which is Ethereum 1.0 and all the work that's being done there. So we're building sort of a parallel system with an easy to use opt-in platform for things to start benefiting from the scalability that we're building on Eth 2.0. **A** (12:40): Is that more or less pretty good? **B** (12:41): And I love that you're probably biting your tongue, because I know some of these points we can sit here and debate for a while. **A** (12:52): So was that more or less accurate? **B** (12:56): Yeah, yeah. I mean, there's like a whole can of worms, depending on which one of those three things that he mentioned you open up. Like, you know, sharding is hard because of data availability. Sharding is also hard because of the way that you're trying to leverage the security of the entire system, but with just a subset of that security on each shard, through random sampling. And so there's a lot of hard problems baked in there and things that we're working on. But from a higher level, I think you captured it. **C** (13:26): Yeah, sorry, I didn't go much into the details of sharding. 
That's probably one of the most. **B** (13:30): I didn't go into the details of Cosmos, you know. **A** (13:35): Yeah, just for the sake of the audience, explaining data availability might be good context so that everyone's on the same page. **B** (13:46): So in these single-blockchain paradigms, we generally just take for granted that the data is available: you have this one chain, there's not that much data, and you're going to be able to get it. When you start splitting it up and having much higher data requirements on the system, and much higher data throughput on the system, the assumption that any one actor in the system is necessarily checking and ensuring that the data of each of these components of the system is there and available no longer holds. Even though it's a unified consensus, once there's, say, shard 10, there's claims to kind of this global consensus that the data is valid and available. If I don't go and check that myself, then if someone's withholding data, or the data actually never was there, that can kind of break and halt components of the consensus. And so there's a number of both mathematical and crypto economic techniques that we use so that there's kind of crypto economic claims and mathematical guarantees that the data is available and valid, even though I haven't gone and looked at it all myself. That's maybe enough for the conversation. Sunny, would you add anything there? **C** (15:13): No, I think that covers it pretty well. **B** (15:16): Yeah. And again, this problem becomes hard when you try to have more data. For Bitcoin, for Ethereum, potentially for a single Cosmos Hub (and maybe y'all do have data availability algorithms and things in there), it's not that hard, because there's not that much data, and kind of everyone that's interacting with the system has the data and you can just readily get it. 
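Danny's data availability point, and the role of random sampling he alludes to, can be illustrated numerically. This is a toy model, not the actual Eth2 sampling scheme (which combines erasure coding with sampling); the numbers are only meant to show the exponential effect:

```python
# Toy model: if an adversary withholds some fraction of a shard's data,
# the chance that k independent uniform samples all miss the withheld
# portion (so the withholding goes unnoticed) shrinks exponentially in k.

def miss_probability(withheld_fraction: float, samples: int) -> float:
    """P(every sample happens to land on available data)."""
    return (1.0 - withheld_fraction) ** samples

# With erasure coding, an adversary must withhold a large fraction of
# the data (say half) to do damage, and even a modest number of samples
# makes that very hard to hide.
print(miss_probability(0.5, 30))
print(miss_probability(0.5, 100))
```

Each client only samples a handful of chunks, yet collectively the network gains a strong probabilistic guarantee that the data really is there.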
But when you start having a ton of data, it gets harder. Do you all do anything with data availability in the context of a single hub, or do you make this assumption? **C** (15:54): No, yeah, right now we assume the availability of data, and we assume that about the chains that are being connected to. More importantly, it's obviously permissionless; you can create an insecure chain and connect it. Our assumption is we sort of put a lot of this burden onto the users, or whatever clients they're using, to only want to interact with chains that are decentralized enough that you get data availability guaranteed by the decentralization of the chain, more so than by any sort of cryptographic proofs. And this is part of what comes from our belief in application-based sharding, in the sense where, for example, in the scalability trilemma there's this assumption made that creating more chains, like more side chains, is equivalent to a block size increase. But the part that we disagree with here is that if each of these chains is for a particular application, it's unlikely that any one user is really a user of every single application. And because things are not distributed evenly, if I am only a user of dapp A, there's no reason for me to be processing the ones for B and C. And so it is within the realm of reason for me to be capable of running a full node for all the applications that I personally am using. **B** (17:44): Mm, interesting. Yeah. Assuming that you're not bringing in chains that are of crazy magnitude, but even then, like, if you want to run it, you'll run it. Yeah, I mean, I buy it to a certain extent. I'm curious. The burning question today was, I guess: if I'm on chain A and I want to interact with chain B via the Cosmos Hub, then the security of that interaction is the lesser of either the hub or chain B? Yeah. **C** (18:22): So. Yeah. 
So for IBC, when reasoning about it, I like to separate out the reasoning of IBC and then the hub. The hub-and-spoke architecture is sort of this optional architecture that may arise. To be honest, I'm not personally sure if it is the architecture that is going to arise. I think it might be, and I think the Cosmos Hub really has to provide other guarantees, like starting to provide more sort of shared-security-like features; that will basically be what provides incentive for people, and is the real value of the hub, in my opinion, more so than the universal connector. But I think the universal connector is also important, where you can consider the Cosmos Hub as this: it is an application-specific blockchain, and the application that it's specialized at is being up to date with all the IBC protocols. So if a new chain launches with a slightly different consensus protocol and they've written the IBC integration for it, it would, you know, be unreasonable to expect every other chain in the world to upgrade for that. But the Cosmos Hub, its job is to do that. **B** (19:42): Yeah. **A** (19:45): Wait, before we dive into this can of worms, just to zoom back out real quickly, Danny, can you talk about the phases right now of what's planned, and sort of the timeline, just so we're all on the same page? Because there's been a lot of updates recently. **B** (20:03): Yeah, yeah. And if you want to read more, I just put a blog post out at blog.ethereum.org that's called The State of Eth2, June 2020. That goes more in depth than I think we might today, at least from a high level, to understand what's going on. But yes. So the Eth2 architecture: there's this kind of central system chain called the beacon chain, where all the validators live and where the shards kind of connect back into and are controlled via this core consensus. 
And this, as y'all know very well, launching this, is a pure proof of stake chain. It's a single chain, and what it does is it primarily manages validators, it comes to finality, and it will eventually link in shards. And launching a single-chain proof of stake in production is a non-trivial task. So this is divided out into its own phase we call phase zero, and that's really the bootstrapping of this new consensus mechanism. Phase one is the creation of a number of shard chains. Right now the current specification is 64, that live in parallel to this core system chain, in which validators that are in the beacon chain are randomly sampled and distributed across these shards at any given time, building them and also making claims about the availability and validity of the data of those shards, linking them back into this core consensus beacon chain. When shards link back into this core consensus, they kind of become more final and they become like the beginning of that shard chain. So like, if I'm going to go find the head of the shard chain, first I go to the beacon chain, I find the latest crosslink, as we call it, and then I walk the chain from there. So in phase one there's actually no execution, smart contracts, that kind of stuff. This is actually purely working on this data availability problem, trying to have 64 shards in parallel and be able to make claims about the availability of a ton of this data and kind of unify it in a single consensus. From there the intention is to take the existing Ethereum chain and, instead of having its consensus be the current proof of work consensus, to have its consensus be the Eth2 consensus, living as one of these shards. At that point you have existing Ethereum living under this consensus, and 63 data chains. 
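The head-finding procedure Danny describes ("first I go to the beacon chain, I find the latest crosslink, and then I walk the chain from there") can be sketched as follows. The data structures here are hypothetical stand-ins, not the Eth2 spec types:

```python
# Hypothetical sketch: start from the shard block referenced by the
# beacon chain's latest crosslink, then walk forward to the shard tip.

from dataclasses import dataclass, field

@dataclass
class ShardBlock:
    slot: int
    children: list = field(default_factory=list)

def shard_head(crosslink_block: ShardBlock) -> ShardBlock:
    """Walk forward from the last crosslinked block to the chain tip."""
    block = crosslink_block
    while block.children:
        # A real client would run a fork choice over the children; we
        # assume a fork-free shard chain here for simplicity.
        block = block.children[0]
    return block

crosslinked = ShardBlock(100)
b1, b2 = ShardBlock(101), ShardBlock(102)
crosslinked.children.append(b1)
b1.children.append(b2)
print(shard_head(crosslinked).slot)  # 102
```

The crosslink acts as the trusted anchor: everything up to it is vouched for by the beacon chain's consensus, and only the short tail beyond it needs shard-local fork choice.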
What we call phase two is adding execution and accounts to all of these chains, such that you now have 64 of what feels like Ethereum today, living in parallel and being able to communicate through this single protocol. So again: phase zero, kind of bootstrapping the core consensus into this thing we call the beacon chain. Phase one, tons of data in the form of these parallel shard chains. Phase 1.5 is the integration of Ethereum mainnet today into this new consensus mechanism. Phase two is adding state and execution to all of these shards. There's plenty of fun stuff we can kind of do along the way, and it's phased in such a way that we can manage the complexity of specifying, building and releasing all of this in kind of a sane way. Because each phase is a substantial engineering effort. **A** (23:43): There's a couple of things in there, which is, one, Eth2 consensus and, two, how these shards talk to each other. Those are two of the things that I want to tease out in this call: how Eth2 consensus is different from Tendermint BFT, and how inter-shard communication happens, and how that is different from inter-blockchain communication. So your consensus, is it one thing? **B** (24:14): Well, Casper is a family of protocols. It's actually probably two protocols. But Casper FFG, Casper the Friendly Finality Gadget, is what will be the core of the consensus to start. And this actually, I think, looks a lot like Tendermint BFT, in that it kind of follows these more traditional BFT algorithms. It does have this notion of a 1/3, 2/3 threshold in there, and does have this notion of slashing, like when somebody distinctly breaks the rules in a way that could cause network faults, and punishment for it. And so we have this accountable safety. And I meant to bring this up when I talked about Cosmos earlier: there's probably a number of small differences. Certainly there are. 
But I think one of the big things is that Casper FFG, we would say, favors liveness over safety, and so the chain can continue to be built without it necessarily being finalized. Whereas Cosmos, and correct me if I'm wrong, generally has these units of time where it's like, we're all going to figure this out and we're going to finalize this, and then we're going to move on. And Sunny, is that a reasonable distinction there? **C** (25:31): Yep, yep. We don't make progress until we have 2/3 of validators agreeing on a block. **B** (25:40): Next one, I'm curious: is there one Cosmos Hub or are there multiple? I want to say, like, the main Cosmos. **C** (25:46): Yeah, there's currently one chain called the Cosmos Hub. **B** (25:50): Okay, cool. So the Cosmos Hub, I'm curious if y'all, maybe early on or later on, have seen any points at which the chain can't come to consensus and there's social coordination, or there's like, you know, a five minute drop. Have you all seen any of that so far? **C** (26:09): Yeah, there's never been any situation like that. The only time is when we do an upgrade. The upgrade process on the Cosmos SDK, it actually got much better in the current version, which is not deployed on mainnet yet, where now it should basically happen automatically and seamlessly. But in the past, especially when you're starting a new chain, you just got to sit there for like 15 minutes until you wait for everyone to come online. But as soon as a chain has started, in the past we've never had sort of any drops in liveness of the chain. **B** (26:51): When you do upgrades, do you kind of consider it as though you're bootstrapping a new chain with the previous state? Yeah, yeah, I've been thinking about this stuff recently. I want to pick your brain. **C** (27:02): Yeah, sure. I mean, what we do at the moment is we export the state. 
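Sunny's "no progress until 2/3 of validators agree" rule is easy to state in code. A hedged sketch with made-up validator names; real Tendermint counts voting power rather than validator heads and runs multiple vote rounds:

```python
# Sketch of the Tendermint-style commit condition: a block is only
# committed once strictly more than two thirds of the total voting
# power has signed off on it.

def can_commit(votes: dict, total_power: int) -> bool:
    """True once > 2/3 of voting power backs the block.

    votes maps validator name -> voting power that voted for the block.
    Integer arithmetic (3 * votes > 2 * total) avoids rounding issues.
    """
    return 3 * sum(votes.values()) > 2 * total_power

powers = {"val_a": 40, "val_b": 30, "val_c": 30}  # hypothetical validator set
print(can_commit({"val_a": 40, "val_b": 30}, 100))  # True: 70 > 66.6
print(can_commit({"val_a": 40}, 100))               # False: 40 < 66.6
```

The strict inequality matters: with exactly 2/3, a single Byzantine validator could still equivocate undetected, which is why BFT thresholds are always "more than" two thirds.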
We basically use governance to agree on a block height at which we export the state, and use it to export into a genesis file that then acts as the genesis file for the new chain. **B** (27:24): Essentially like a state transition, a state transition to get it ready for this. **C** (27:28): Yes. And what we've done in the current version of the SDK, the one that's on the master branch but not deployed on mainnet, is that you can do these upgrade processes basically automatically, where it'll do this entire process without having to deploy it as a new chain. It follows the same process, but it does it automatically, so there's no downtime. **B** (27:55): Yeah. **A** (27:57): If Eth2 consensus is using Casper FFG, what's the fork choice rule? Like, you know, if you're favoring liveness, how do the validators decide which fork to go with? **B** (28:15): So what we use is a modified, you know, adapted to our application, version of GHOST, which is greedy heaviest observed subtree. So we have what we've finalized, call that some block here, and then we have a chain that's built on it. Generally this is not a very fork-filled chain, but in the event that it was, you might have a couple of different branches that you have to decide from. Validators are constantly making these votes on the head of the chain, and those votes recursively give weight to all prior blocks. And so essentially I start from where I'm finalized and I just follow the heaviest blocks with respect to these branch votes. And that algorithm allows me to find the head that has been built beyond what has been finalized. And there's a number of reasons that we've chosen to follow this kind of, I mean, liveness-favoring approach, one of which is the following. 
I think primarily it kind of fits into this notion of what a blockchain is to users today, which is this highly available, continually building thing. And what you do get is this notion, kind of like in proof of work, where I can start making claims about what will happen to these blocks, like the likelihood that they will be finalized, based upon these votes. Kind of like making a claim that, you know, a Bitcoin block three blocks deep is probably going to be in there forever, but six blocks deep is really going to be in there forever. So I can make these kind of probabilistic claims. And another, there's a handful of them, but another is that, in the event that there is some sort of major network partition, the network is kind of self-healing on the order of multiple weeks, in which validators begin to lose stake. So that, in the event that the chain cannot come to finality, these branches that go off on their own, assume there's a major Internet partition in which, like, half the validators are cut off, and in today's age there could be any number of reasons that we have these major network faults across the entire Internet, these chains can continue to be built and provide some amount of utility to those that are using them, and eventually kind of self-heal and finalize, or ultimately be partitioned indefinitely. **C** (31:14): Can you describe what you mean by self-heal in this case? **B** (31:17): That's like, it's not a very scientific term. **C** (31:22): It feels like, with self-healing, what usually happens in this case is one side has to get thrown out and the other side has to stay. And so doesn't that kind of suck for the side that was making transactions that got thrown out? **B** (31:37): Not necessarily. 
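The fork choice Danny walks through above (start from the last finalized block, follow the heaviest-voted branch) can be sketched like this. It is a simplification of GHOST, not the actual LMD-GHOST specification, and the subtree weights are supplied directly rather than accumulated from individual attestations:

```python
# Simplified GHOST: from the finalized block, repeatedly descend into
# the child whose subtree carries the most vote weight, until a leaf.

def heaviest_head(children: dict, weight: dict, finalized: str) -> str:
    """children maps block -> list of child blocks; weight maps block ->
    total vote weight attributed to that block's subtree."""
    block = finalized
    while children.get(block):
        block = max(children[block], key=lambda b: weight.get(b, 0))
    return block

# Tiny fork after the finalized block F: A has two children, and the
# branch through C carries more vote weight than the branch through B.
children = {"F": ["A"], "A": ["B", "C"], "B": [], "C": ["D"], "D": []}
weight = {"A": 10, "B": 3, "C": 7, "D": 7}
print(heaviest_head(children, weight, "F"))  # D
```

In the real protocol each head vote recursively adds weight to every ancestor, which is what makes the "heaviest observed subtree" walk converge on the branch most validators are attesting to.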
So when I say that, I mean a network partition has happened on the order of three weeks or so, in which user activity continues to exist on both sides, and both sides can ultimately, eventually finalize via the loss of stake from the non-active validators on each of those branches. Take an easy example: say the Great Firewall goes up real hard and Chinese validators on Eth2 are existing in a vacuum for more than three weeks, on the order of months. This chain can continue to exist for those that can interact with it on that side of the Great Firewall, and similarly outside of it, and eventually, I think on the order of many weeks, both chains can finalize. And ultimately now you have two Ethereums, which there are two Ethereums to some extent today. You know, the ledgers would diverge unless there was some sort of social coordination to try to do something otherwise. **C** (32:52): Right, I was just going to bring that up. Why do you want an automated process to do this rather than allowing social consensus to make the decision that, okay, you know what, these aren't going to resolve, let's become two chains? Especially because this has happened already before on Ethereum, and you do have. **B** (33:10): Plenty of time for social consensus, right? Three weeks is a lot of time, and likely enough time for community members to be like: one, what the hell's going on? Two, how is this going to resolve? Three, can we resolve this before? Because again, if the chains become aware of each other before this time of both being able to finalize, they'll use the fork choice and they're resolved. Whereas in this event, we're also planning for very extreme global conditions, aka World War 3 and things, in which social coordination might be difficult.
Social coordination exists, and on the order of three weeks is, I think, a reasonable time to coordinate. But in the event that it cannot, these chains can continue to build and they can still provide utility to users. I think we could sit here and debate this for a long time, and it's really extreme tail-risk scenarios that it comes in. **A** (34:11): But you know, so what happens then? Because you have both liveness and finality for Eth2. And so let's say a Cosmos chain or the Cosmos Hub connects to Eth2, right? Because both have finality, and so they could speak to one another. But then suddenly you have a network partition, and then you have two chains now. And meanwhile Cosmos is also talking to you. So how do you handle that? **B** (34:41): I mean, Eth2 at that point could take in proofs about Cosmos, right? You could make claims about what Cosmos is finalizing. And so there's potentially some directional utility that way. But Cosmos, if Cosmos is only looking at finalized Eth2, is going to see it as a liveness failure on the Eth2 side, which I think is probably a reasonable thing to do. You could potentially try to make probabilistic claims about the fork choice and things, but it's probably not the path to go. I mean, I'm pretty sure Cosmos is looking into supporting, and maybe already supports, proof of work chains, right? **C** (35:31): IBC does not at this time. IBC right now is focused on finality-based things. And when we do want to talk to proof of work stuff, we just use, you know, six or twelve blocks deep is good enough. **B** (35:43): Yeah. Okay. So you could begin to make those types of claims and people could use them if they felt like it.
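The fixed-depth heuristic C mentions for proof-of-work chains ("six or twelve blocks deep is good enough") can be made quantitative with the catch-up calculation from the Bitcoin whitepaper. A minimal sketch, where the attacker's hash-power share `q` is a free parameter rather than anything quoted in this conversation:

```python
import math

def attacker_success_prob(q: float, z: int) -> float:
    """Probability that a miner with hash-power share q ever overtakes
    a block buried z confirmations deep (Nakamoto's calculation)."""
    p = 1.0 - q
    if q >= p:  # a majority attacker always catches up eventually
        return 1.0
    lam = z * q / p  # expected attacker progress while z honest blocks land
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob
```

With a 10% attacker, six confirmations already push the reversal probability below 0.1%, which is why a fixed-depth cutoff is a workable stand-in for finality when an IBC-style bridge talks to a proof-of-work chain.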
Similarly, if the Eth chain continued to be built and was not being finalized, I could also locally make decisions if I felt that they were safe enough. And that's also one of the design considerations: a live chain can simulate a safe chain, whereas a safe chain can't simulate a live chain. And so we kind of consider it the more powerful of the two. But there are a number of trade-offs here. **C** (36:19): Have you ever considered doing something where the base layer is more safe, and then you can build more liveness-favoring things as a second layer? **B** (36:32): I mean, you certainly can, right? Any rollup chains, any layer-two protocols being built on Ethereum could, even today, if the proof of work blocks stopped moving, still pass messages and make claims and things, assuming that the chain will become live eventually. So certainly. And in the event that Eth2 finality is not live, which I don't expect to happen frequently, kudos to Cosmos for being extremely live, these layer-two protocols could similarly make local decisions. Especially if I'm in state channels and that kind of stuff, and we're passing messages back and forth to each other. Yeah, actually there might be some dangers that come in there with respect to challenge games and stuff. But yes, layer two can certainly be live in the event that layer one's not. But it's kind of a separate consideration. **A** (37:35): So let's talk about inter-shard communication. How do the 64 Eth shards talk to each other? And then we'll go to Sunny about how that's different from how IBC functions, or whether IBC could be lent to it. **B** (37:49): I think it could. I think that Eth2 doesn't have to actually care, from a core protocol standpoint.
And maybe there are certain things, based on this conversation or others, where if you make this one decision this way instead of that way, it can be much better. But so we have this beacon chain. This is the core system chain, and validators are always making these crosslinks from the shard chains back into the beacon chain. And what that really is, is making a claim about what the head of a shard chain is and making sure that the core system knows about it. And this is good because the core system can then finalize these heads. But also, all of these shards always know about the beacon chain: if I'm running a shard, I always know about the beacon chain as well. So the beacon chain is a consistent piece of overhead and knowledge across the system. And so if I'm running shard B and I see a recent crosslink from shard A into the beacon chain, what that is, it manifests as a merkle tree root. And a merkle tree root is essentially a digest of that recent block's data and state and transactions and things. And with that 32-byte digest, I can then make claims and proofs about the other shard. So for example, if on shard B I need to reference the balance of something on shard A, I reference this merkle root and I bring in a little merkle proof, and now I can make a claim about what that is and prove it to my applications and to other users. **C** (39:30): Does every single block on the shard have to be sent as a digest to the beacon chain? **B** (39:41): Yes and no. In the optimal case, we're crosslinking all shards at every slot. But if shard A has a few slots that are missed, then there'd be a claim made about all of those blocks back into the beacon chain.
But the beacon chain would just remember the head digest, because that head digest has reference to historical accumulators and reference to state. You can make claims about previous blocks by virtue of it being a blockchain. And so that head really is enough of a digest to make the claims that you want. So it's always just remembering one reference, and that's the most recent crosslink and the most recent head that it can know about. **C** (40:29): And one more, sorry, just to understand it: do the shards also have a consensus protocol, or is the consensus only happening on the beacon chain? Do the shards have a fork choice? **B** (40:41): So generally a shard head is like one block away. You go to the most recent crosslink in the beacon chain and you probably walk one block to find the head of the shard chain. But in the event that crosslinking is not happening optimally, these previous votes, the attempts to become a crosslink, serve as votes for the head of that chain. And so you actually get a little sub-GHOST chain off of the beacon chain. So the consensus mechanism is to fully bring these into the core of the system, the beacon chain, and to finalize them, but the liveness of following them is with respect to this fork choice off of the beacon chain. **A** (41:31): The crosslinks sound really complicated. So is it true that the beacon chain does not hold all of the data, it just holds a merkle root against which all of these different proofs are made? **C** (41:44): Right? **B** (41:44): So the beacon chain has this core system state, and one of the components is a latest crosslink for each shard, which is a reference to the head of that shard.
And so when I'm crosslinking these things, I'm making crypto-economic claims about the availability and validity of the data back into the core system. There are a number of mechanisms that we use there; we can get into them if you're interested, I was just working on the specs today. But to run just the core of the system, I don't need to have all of the data of all the shards, and if I do run the core of the system, I can then sync and run any number of those shards. **A** (42:27): So if I were to try and query, let's say, a thousand blocks before then, would that entail several hops through these different crosslinks just to get back to the shard where the data was stored? **B** (42:41): Query in which respect? What do I want to know? **A** (42:44): If I wanted, like, a proof about something that happened a thousand blocks ago, how does the logic work? How do I retrieve data and ensure that it's true? **B** (42:57): Right. So there are a number of things there. If I'm just running the shard chain with respect to the beacon chain, it's just like running a normal blockchain. But if I wanted to make proofs about some other shard chain, then I can make proofs through this crosslink about anything on that shard. And we add some extra fun bits called double batch merkle accumulators, which make it so that if I'm making proofs about historical things, it reduces the depth of those merkle trees. And really, the expense of a proof is the depth of the merkle tree. And so you can, through that single crosslink, go as deep into the history of that entire chain as you want by jumping back.
But there are a couple of helper accessors that batch history into more easily provable things, more easy in the sense of merkle tree depth. But generally, if I'm doing cross-shard communication, I'm making proofs against the latest crosslink, because I'm making claims about what the head of the state is. Similarly, if I'm going to the Cosmos Hub and I want to communicate about something, making a claim about the token history a thousand blocks ago doesn't really help me make claims about the tokens that I can use now. So the head, with respect to cross-shard communication, is primarily what I want. But you do have the ability. In Ethereum today and in most blockchains, merkle trees are the backbone of these things, and because it's a blockchain, you can make arbitrary proofs about pretty much anything in the system, and anything historical; it's just a matter of what the complexity of that proof looks like and the depth of that proof. **A** (45:17): Gotcha. **C** (45:18): And so can shards talk directly to other shards, or does all communication have to get routed through the beacon chain? **B** (45:25): It has to get routed through the beacon chain. And this is because once I start allowing direct communication, I've now premised the state and the fork choice and the consensus of these separate shard chains on each other. And so if one got rolled back for any number of reasons, that could trigger rolling back another chain, which could trigger rolling back another chain.
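The proof mechanics B walks through, a 32-byte crosslink root on the beacon chain plus a merkle branch whose cost is the depth of the tree, can be sketched roughly like this. The hash function and tree layout are simplified assumptions, not the Eth2 spec:

```python
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    """Interior node: hash of the two children's hashes."""
    return hashlib.sha256(a + b).digest()

def verify_branch(leaf: bytes, branch: list, index: int, root: bytes) -> bool:
    """Recompute the root from a leaf and its sibling hashes. The proof
    size, len(branch), is exactly the depth of the tree."""
    node = leaf
    for sibling in branch:
        # Ordering at each level depends on whether we are the left or
        # right child; halving the index moves us up one level.
        node = h(node, sibling) if index % 2 == 0 else h(sibling, node)
        index //= 2
    return node == root
```

A shard B client holding only the crosslinked root for shard A can check a claimed balance leaf with `verify_branch(leaf, branch, index, crosslink_root)`; nothing beyond the 32-byte root and the log-depth branch needs to cross shards.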
If you allow direct communication, you've all of a sudden allowed the entire system to be brittle upon itself, blowing up the complexity such that you probably just need to follow all shard chains at all times, and you lose all your scalability again. **A** (46:14): So this architecture kind of looks like a hub-and-spoke architecture. So how do they differ? **C** (46:24): I would say one of the main differences is the requirement. In Cosmos we don't have this as a requirement where you have to talk through the Cosmos Hub. It's an optional thing where you can talk to the hub and use the hub as this system, but most chains usually will not have their validity dependent on what the hub accepts. So I think that's the main difference here, where in Eth 2.0 the validity of a shard is based on what ultimately gets crosslinked. **B** (47:06): There's a strong assumption that what I see in the beacon chain is valid and true, right? And maybe you don't make those assumptions as much in Cosmos. But then again, if you see something finalized in Cosmos, your application may assume that it's valid. Another thing, and there's a lot here probably, but one is just the taking of a single consensus mechanism and attempting to leverage the security of that consensus mechanism across essentially N chains. Whereas I think Cosmos is not actually adding security to any of the chains that are connecting to a hub. Okay, well, actually I was curious if that was on the roadmap. So in Eth2, we assume we have this security budget of the core consensus and, there are trade-offs here, but we can leverage it across these many chains. Whereas currently,
Cosmos assumes that these chains have their own security, their own token and crypto-economic security and all that kind of stuff. Another thing is that these chains, because of this liveness-favoring protocol, communicate on a very quick basis; they can communicate on a per-slot basis, so in the current specification, 12 seconds. Whereas the asynchronous communication of chains via Cosmos, if they go through the Cosmos Hub, I would assume is the latency to finality of chain A plus the latency to finality of the Cosmos Hub. Is that correct? **C** (48:53): So yeah, assuming they're talking through the Cosmos Hub, so it's a zone talking to a hub talking to a zone, then it would be the latency of consensus on chain A, then one block on the hub, and then a block on B. **B** (49:12): How long is a block on the hub? **C** (49:14): Five seconds. **B** (49:15): Okay. So no, I don't have to communicate through finalized crosslinks, I'm communicating through head crosslinks. At slot zero, I crosslink chain A; at slot one, chain B can make claims about chain A. Those things don't become finalized for on the order of two epochs, which is in the 12-minute range, but applications can very quickly interact and talk across these chains. **C** (49:47): Wait, but then doesn't this sort of lead to this situation where there might be like massive global rollbacks required? **A** (49:54): Not. **C** (49:54): Yeah, not rolling back finalized state, but rollbacks. **B** (49:57): Of head, yeah, with respect to each other. So between chain A and chain B, I can look at the beacon state and safely make claims about these chains.
The beacon chain can roll back, and that would trigger rollbacks on these chains, but they would not be conditional upon each other. So if the beacon chain had a fork, and a bunch of stuff had happened on these shards, but then the fork switched over here, on each of these single chains it would just look like there was a big fork on a single chain. **C** (50:35): What if the beacon chain accidentally got the wrong... so how am I giving a non-finalized head state of a shard to the beacon chain? How is that being crosslinked there? Let's say only 5% of the stake that's assigned to this shard has voted on this current head block. Why would that be sent to the beacon chain? **B** (50:56): Well, that wouldn't be. There can be liveness failures in crosslinking, and so the shard chain can kind of build in isolation and nobody can make claims about it. A committee that is assigned to a shard has to reach this 2/3 threshold to bring it back into the beacon chain. If you had only 10% of this committee make the claim, then the beacon chain would not reference the crosslink, and shard B would not be able to make claims about shard A in that event. So if you have liveness failures. **C** (51:24): Sorry, then I don't understand. In what case can you not wait? It seems like for shard B to act upon something that happened in shard A, it does have to wait for finalization. **B** (51:39): So a crosslink is not finality. A crosslink is a committee bringing in references of a shard chain into the beacon chain. Finality is this multi-epoch consensus algorithm that happens with the entirety of the validator set rather than a subcommittee. **C** (52:02): Okay, okay, sorry. So when you're saying that we need the committee for shard A to hit its 2/3 threshold, or whatever its threshold is.
What you're saying is we don't need finalization on the beacon chain, right? **B** (52:18): So you could imagine a design where I only allowed chains to talk to each other after we had finality on the beacon chain. **C** (52:26): Why is that such a big... how long is two epochs? Why can't we get finality much faster? **B** (52:42): One of the reasons is that we have a shit ton of validators. And when I say a shit ton of validators, I mean the actual consensus signing entities in this protocol. And the reason that we have a shit ton of validators is because, to randomly sample these validators across shards safely, we need to have large enough committee sizes. And so there's certainly a message overhead, both on the network layer and in processing on chain, that is induced by all of these validators. And so that's a distinct trade-off there, between time to finality and the number of consensus participants. **C** (53:21): So yeah, maybe that's a good lead-in to talk about some of the differences in staking. How many validators do you expect to see on the beacon chain? **B** (53:31): To launch the chain? Currently it's like 15,000 validators at launch, and there's a nuance here I'm going to discuss. We expect in chain operation eventually to have 200,000 to 400,000 validators. And the terminology here I think is a bit confusing, and I try to make it clear when I talk about it, that a validator is a consensus entity. It is an entity of 32 ETH that has its own keys, so it can sign things, and it's assigned to do things in the protocol. A node that a user is running may very well have any number of validators on it. And so usually, I think in other proof of stake protocols, we have this notion that a validator can have any amount of stake weight.
And usually a validator runs on a single node and has a single signature, and it just has a certain weight applied to it. But because we are randomly sampling validators across all of these shards, the accounting becomes much easier if we can just think of them as contiguous, same-sized units and move them around. That's why we have the 32 ETH requirement. And to get safe committee sizes through this random sampling mechanism, we need to have a large number of consensus entities to sample across, which I think many proof of stake protocols don't have, and so they don't really have this notion of splitting up the duties so granularly. **C** (55:13): Why can't someone just have one key? Why link this sampling process to the actual keys that people are managing? Why can't I just say, okay, look, my total stake is 32 times 12, or 32 times 100 ETH? Why do I actually have to manage 100 physical keys, instead of the protocol just treating it as 100 pieces and saying, okay, on this epoch you happen to be on these 37 shards, and on this one your signature has a weight of four? Why add all this extra overhead by adding so many keys? **B** (56:15): So, assuming that you're still randomly sampling the validators and you're splitting them up into logical entities, you're still going to have the message overhead, both on the network level and in the processing. The primary thing that you gain there is a reduction in the state size from storing all these validators. But that's really not the primary bottleneck.
Even on the order of 300,000 validators, you're talking about like 30 megabytes of core system state, and that doesn't grow unbounded. **C** (56:54): So let's say my node is running with 10 signers, and they're assigned to the same shard. Then I wouldn't have to do all that BLS aggregation locally, I could just send out one signature. **B** (57:10): So you're saying if I'm running, like, 100 consensus entities and I happen to have three of these signatures assigned to the same shard, then I can pre-aggregate. You could also pre-aggregate if you have different keys. **C** (57:23): Well, I'm saying if I just used one key, I wouldn't have to pre-aggregate at all. I could just send out my one signature, so I don't have to waste time pre-aggregating. **B** (57:32): Yeah, yeah. And ultimately there are a lot of little trade-offs here. I think early on we decided that the consensus complexity of randomly shuffling, and being able to think of everything as different units of about the same weight, was easier than building in shuffling mechanisms where we have validators of larger weights and stuff. I didn't actually ever design the protocol that way, and so maybe that assumption is slightly incorrect. But the ease of just tossing validators around in the protocol was certainly one of the decisions there. And most of the overhead, I think, you don't necessarily get away from by going the other path; it's more consensus complexity, and there might be minor optimizations, like you pointed out, that come in each direction when it comes to the signing. Signing is not our bottleneck. Signature verification is certainly more of a bottleneck, but it hasn't shown in current testnets to be the major issue.
Even if I run 10,000 validators on a single machine, signing is really not the issue right now. So fortunately it hasn't become the issue. **A** (59:00): This was brought up on Twitter: how do you prevent somebody with a ton of stake weight from just spreading that out into various different validators, staking 32 ETH across the board, when you're doing this random sampling? Or is that what they're supposed to do? **B** (59:21): Yeah, they can. We make an assumption that there's not an attacker of larger than 1/3 size. And so the random sampling, the large enough committee size, has to do with the assumption that, okay, we have a 1/3 attacker and we're going to be splitting all these consensus entities randomly across these committees; what are the chances that a 1/3 attacker would get a 2/3 majority of a committee? And again, regardless of whether you have them stacked on a single validator and split into logical units, or you actually have them as separate units, this assumption still holds. Our committee sizes target 128; they can actually be larger. Taking a 1/3 attacker and randomly sampling into committees of size 128, and there's some biasability that can come into the randomness, which is kind of an interesting problem, I want to hear what you all use for randomness, there's like a 1 over 2 to the 40 chance that a 1/3 attacker would randomly get 2/3 of a committee and thus be able to corrupt it. But corrupting committees, this assumption about a 1/3 attacker, and this assumption about how biasable the randomness is: these are crucial, crucial components.
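The "1 over 2 to the 40" figure B cites can be sanity-checked with a simple binomial tail. This models committee selection as independent draws, a simplification of shuffling a fixed validator set, so treat it as a ballpark estimate rather than the spec's exact analysis:

```python
import math

def corruption_prob(committee_size: int, attacker_share: float,
                    threshold: float = 2 / 3) -> float:
    """Probability a random committee hands the attacker a supermajority,
    summing the binomial tail P(X >= ceil(threshold * size))."""
    need = math.ceil(committee_size * threshold)
    return sum(
        math.comb(committee_size, k)
        * attacker_share ** k
        * (1 - attacker_share) ** (committee_size - k)
        for k in range(need, committee_size + 1)
    )
```

Under this model, `corruption_prob(128, 1/3)` comes out on the order of 10^-14, comfortably inside the roughly 2^-40 margin B mentions, and the exponent degrades quickly as the committee shrinks, which is the pressure keeping committee sizes large.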
And if this assumption fails and committees can be corrupted, then, you know, we're back to the drawing board. So this randomness is crucial. What do you all use for randomness? I assume you have a source of randomness, right? **C** (1:01:08): No, not on the Cosmos Hub, because there's nothing in the Cosmos Hub that really requires it. **B** (1:01:14): What about leader election for the block proposer? Do you have single leader election, or does it just use the previous block hash or something? **C** (1:01:20): No, it's not random, it's deterministic. There's a round-robin algorithm that we use, and you can figure it out, and we haven't really seen or had any issues with it. **B** (1:01:36): DoSing is one of the big concerns there. **C** (1:01:38): We basically leave DoS protection as an exercise to the validator. Validators have essentially built up pretty good systems, like sentry node architectures and stuff, and we haven't really seen DoS attacks actually be an issue for a validator yet. **B** (1:01:59): It's something we're certainly concerned about, but we're kind of leaving it to the validators to start, because there is some amount of look-ahead in knowing people's duties, especially block proposal. But we're very interested in integrating single secret leader election, for which there's some promising research, but it's just not quite there. Meaning the block proposer at the next slot is deterministic, but you can't guess it ahead of time; you don't know it ahead of time, only they know it. **C** (1:02:35): I mean, like I said, we haven't included this in the Cosmos Hub yet, but there are people who have written versions of Tendermint that do have randomness included. There are two of them that I know of: one of them uses VRFs, and the other uses BLS aggregate signatures in the Tendermint signature.
So sort of more DFINITY-style, I guess. **B** (1:02:58): Right. **C** (1:03:00): I'd also be interested in creating some sort of more VDF-based one. Not that I really want to integrate it with the consensus protocol, but I think it'll be a good addition to have in core in the Cosmos SDK for use by applications. **B** (1:03:17): Yeah, as a component. And I mean, to be clear, we're launching without VDFs. We have what's called a RANDAO, where people are making prior commitments and then revealing randomness as they go. But yeah, as a tool for users, which it sounds like you'd be interested in integrating, a VDF can be potentially pretty powerful. A lot of work to do there, though. **A** (1:03:43): Just to timestamp this: right now we're at the one-hour mark, and so in another couple of questions, maybe in another 15 minutes, we're going to open the floor up to Q&A from the guests on this call. For the guests, you're able to raise your hand or enter your question in the chat box. So if you have any questions, leave them there and we'll go through them one by one, FYI. **B** (1:04:08): Cool. **C** (1:04:10): So I think one of the big differences between the staking mechanisms of Cosmos and Ethereum is that we've very much ingrained this notion of delegation, while in Eth 2.0 there seems to be no inbuilt system for delegating stake. What led to the decision not to allow for inbuilt delegation? Because how we think about it, for background, is that it's going to happen no matter what, and we might as well build it in to provide safety and security for the people who are using it, rather than. **B** (1:04:51): Yeah, I got you. And the other side of the coin is: people are going to do it, so let them do it and reduce the consensus complexity.
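A toy version of the commit-reveal RANDAO B describes: each participant commits to a secret in advance, then reveals it, and valid reveals get folded into a shared accumulator. The real Eth2 design uses BLS signatures as the reveals; hashing a plain secret here is a simplification:

```python
import hashlib

def commit(secret: bytes) -> bytes:
    """Published ahead of time: a hash commitment to the secret."""
    return hashlib.sha256(secret).digest()

def mix_reveal(accumulator: bytes, secret: bytes, commitment: bytes) -> bytes:
    """Accept a reveal only if it matches the prior commitment, then
    XOR its hash into the running randomness accumulator."""
    if commit(secret) != commitment:
        raise ValueError("reveal does not match commitment")
    digest = hashlib.sha256(secret).digest()
    return bytes(a ^ b for a, b in zip(accumulator, digest))
```

Because XOR is order-independent, every honest reveal perturbs the output; the known weakness is that the last revealer can compute the result early and withhold, which is exactly the biasability concern raised earlier and one reason VDFs are of interest on top of schemes like this.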
**C** (1:05:00): But then the problem is they end up writing their own. Every validator is going to write their own custom contracts for it, and it's not going to be very secure. And one of the main features that building it into the protocol gives you, from a consensus protocol side, is this feature we built called instant redelegation. If I want to change who I'm delegated to, what you normally have to do is unbond, which takes... is it months, or what's the unbonding period? **B** (1:05:32): No, but it depends. It scales with the amount of other people that are trying to get out, so quickly exiting is not a viable strategy if a lot of people are trying to leave the chain. But normally we expect on the order of a week to a couple of weeks. **C** (1:05:48): Gotcha. So instead of having to unbond, not collect rewards for a month, and then rebond, with instant redelegation it's very safe for me to change which validator I'm delegated to instantly. **B** (1:06:03): Right. **C** (1:06:04): Yeah. **B** (1:06:08): A few things there. Because you come to finality on a per-block basis, what are the actual slashing conditions? What is slashable in Cosmos? Because the rapid changing of stake makes me a little bit worried about some of these more historic slashings, where I'm trying to subvert a chain and wrap around. But I might be naive; it's just a smell test initially, I haven't thought about this much. **C** (1:06:50): So the two slashing conditions we have are: if we found you double-signed on a block, or if you are offline, if you've missed some percentage of blocks in a certain window of blocks. Those are the two slashing conditions. **B** (1:07:09): Can you be slashed fully for that, or do you just kind of get exited?
**C** (1:07:12): You get a slap on the wrist, and more importantly you get kicked off the validator set until you choose to rejoin. You get put into a jail period of two days. **B** (1:07:23): So is there any worry, when I'm initially syncing the chain, of somebody exposing alternate histories and convincing me of some other chain? Or are they expected to just show up with a very recent finalized header? **C** (1:07:41): No, you can't really get tricked in that sense. Unless you're talking about long-range attacks. Is that what you're referring to? **B** (1:07:51): Well, no. Sorry, we're totally not talking about the original question anymore. Say some attacker with significant stake weight went back like a day's worth of blocks and... okay, I see. **A** (1:08:14): There's an unbonding period, and so you would discover it before it ever ended. **B** (1:08:22): Yeah. And I guess by kicking people out of the chain, do you mitigate these attacks? Like, we're essentially ensuring safety against double signing, because if you're not signing at all, you can't create an alternate history, because we're going to kick you out. Is that what's going on there? **C** (1:08:39): No, that's just a punishment for having double-signed. The point of doing that was so they don't come back. It was a way of resolving this problem of: what if people double-sign again and again and again, and how do you deal with the slashing there? It was just a way of easily doing that. But no, it has nothing to do with changing the history. **B** (1:09:12): Yeah, okay, I think I went down a wrong tangent. I'm going to think about this a little bit more after this call.
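The two slashing conditions just described (double-signing, and missing some fraction of blocks in a window) can be sketched as follows. This is an illustrative model only: the slash fractions and threshold below are made-up numbers, not mainnet parameters, and the real logic lives in the Cosmos SDK's `x/slashing` module.

```python
# Hypothetical sketch of the two Cosmos slashing conditions discussed above.
# All parameter values are illustrative, not real chain parameters.

DOUBLE_SIGN_SLASH = 0.05  # fraction of stake slashed for double-signing
DOWNTIME_SLASH = 0.01     # much smaller "slap on the wrist" for downtime

def evaluate(validator_stake, double_signed, missed, window, threshold=0.95):
    """Return (slashed_amount, jailed). A validator that double-signs, or
    misses more than `threshold` of the last `window` blocks, is slashed
    and jailed (kicked off the validator set)."""
    if double_signed:
        return validator_stake * DOUBLE_SIGN_SLASH, True
    if missed / window > threshold:
        return validator_stake * DOWNTIME_SLASH, True
    return 0.0, False

print(evaluate(1000, True, 0, 100))    # (50.0, True)  double-sign: big slash + jail
print(evaluate(1000, False, 96, 100))  # (10.0, True)  downtime: small slash + jail
print(evaluate(1000, False, 10, 100))  # (0.0, False)  healthy validator
```

Jailing is what prevents the repeated double-sign problem mentioned above: once jailed, the validator is out of the set until they explicitly rejoin.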
**A** (1:09:17): I just want to end with one final question, with regards to reentrancy attacks and how they're mitigated on ETH 2, or whether or not ETH 2 is still susceptible to them. **B** (1:09:27): A reentrancy bug, like a contract bug? **A** (1:09:31): Yeah, because in theory you could do cross-shard contract calls. Where do smart contracts live in this construction? **B** (1:09:42): So smart contracts live on the shard chains. Each one of those is kind of an Ethereum, but under this more unified consensus and communication. Cross-shard communications are asynchronous, so reentrancy across shards doesn't really exist. But synchronous communication can still happen within a single shard, and unless somebody advocates for a serious breaking change to the VM, I don't expect that to change. And even if, as Sunny mentioned, there is a push to move these 63 other shards that aren't the Ethereum EVM of today to eWASM, or to a WASM-like virtual machine, even then I don't know if there are going to be restrictions put on the VM to avoid reentrancy within a single shard. **C** (1:10:34): eWASM? You said it. I thought that was sort of a pretty certain part of the roadmap. **B** (1:10:38): You know, it's debated. Anytime people try to optimize low-level operations in the EVM, like doing cryptographic operations, versus in WASM today with the interpreters, they're actually able to get similar speeds. So there's kind of a push and pull here. One reason to go WASM would be to embrace an open, likely to be increasingly widely used standard: to leverage optimized interpreters, and potentially smart contract language constructs that other chains are using.
But the argument against that, assuming we could get similar speed, which is still somewhat debated, would be that the EVM is widely used in the Ethereum ecosystem. Tons of people continue to learn it and build applications on it today, so just fucking go with it. But there's some experimentation going on. There's cool research that the eWASM team is doing called Eth1x64, which is simulating 64 EVMs, building out cross-shard communications, and building out simple applications there to better inform this decision. One thing that we didn't talk about, which I kind of wanted to get your take on: assume we have these crosslinks. From the perspective of the shard chain, I make decisions. Obviously I can be reverted if the beacon chain reverts, but from the perspective of my current chain, I make decisions about other shard chains with respect to this crosslinked root. So from the local chain, it's kind of looked at as though it's finalized, and the nuance of that with respect to reversions is outside of what I want to talk about here. But let's say I'm on shard A. I see a crosslink, and I'm essentially making local finality decisions about that shard with respect to that crosslink. So from the perspective of IBC, I'm assuming when a claim is made about chain A via IBC and something is finalized, it's probably in the form of like a 32-byte root, maybe some metadata? **C** (1:13:34): It will include the signatures and everything. **B** (1:13:38): Okay, so you bring headers into the Cosmos chain. **C** (1:13:43): Yeah, you send over the entire header, which includes the signatures verifying that it is the finalized header.
**B** (1:13:51): When I'm actually bringing information into, say, chain B about chain A, am I bringing in the header, or am I just making a claim about the root of the header? **C** (1:14:03): No, you bring in the entire header. **B** (1:14:05): Okay, interesting. I was going to say, if IBC operated on a Merkle root abstraction, like this being a finalized Merkle root, then you could easily use IBC in this context. You could actually, via that Merkle root, bring in the actual header and then use IBC. But then there's additional overhead there. But totally, man. **C** (1:14:33): So when you give a header, like a Merkle root of a shard, to the beacon chain, don't you need to give proof that it's valid, that it has all the signatures and everything? **B** (1:14:48): So there's a single signature in a shard block, and that's the proposer's. The validators that are attesting to and attempting to make a crosslink about this shard are signing a separate message, called an attestation. And so the attestation could potentially serve as the header, honestly. But I'd be curious to look a little bit more, because I think there's probably a version of the messages in ETH 2 that someone could use IBC with to handle their communications, which is an interesting avenue. I know it's something that you all have at least thought about a little bit, but we can talk about it some other time. **A** (1:15:35): I'm going to open the floor up to the participants to ask questions. Everybody's going to be able to unmute themselves. So does anyone have questions? You're able to unmute yourself now. **B** (1:15:52): Luigi's been making me hungry this whole time. **A** (1:15:55): That steak. **B** (1:16:03): Adam to the moon. Cyrus is reading the chat.
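The header-verification step being discussed, where a chain accepts a counterparty header only if enough of the known validator set signed it, can be sketched like this. It's a deliberately simplified model: real IBC light clients verify actual Ed25519 commit signatures and validator-set transitions, whereas here "signatures" are stubbed as validator names.

```python
# Minimal sketch of the Tendermint light-client rule implied above: accept a
# header only if validators holding more than 2/3 of the known voting power
# signed it. Hypothetical model; real IBC clients verify cryptographic
# signatures, not names.

def verify_header(validator_powers, signers):
    """validator_powers: {validator: voting_power} known from the previously
    trusted header. signers: validators whose signature on the new header
    checked out. Returns True if the header is accepted."""
    total = sum(validator_powers.values())
    signed = sum(p for v, p in validator_powers.items() if v in signers)
    return 3 * signed > 2 * total  # strictly more than 2/3 of voting power

vals = {"v1": 40, "v2": 35, "v3": 25}
print(verify_header(vals, {"v1", "v2"}))  # True  (75 of 100 signed)
print(verify_header(vals, {"v2", "v3"}))  # False (60 of 100 is not > 2/3)
```

This is why, as said above, the entire header with signatures gets sent over rather than just a bare 32-byte root: the receiving chain needs the signatures to run this check itself.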
**A** (1:16:10): And before everyone goes, since we're going to open this up to everybody else: come back in two weeks if you want to learn about Ethermint versus Ava with Emin Gün Sirer. If you liked this content, we have Cosmos Unchained every other week until the end of the year. So just follow up, be on it, get in the know. We got a thumbs up from Jeff Flowers. **B** (1:16:37): Okay, I'm going to ask a question then. Does Cosmos/Tendermint have plans to integrate, or does it integrate with Ethereum today, with proof of work? Or is that not in scope? **C** (1:16:54): Yeah. So currently we do not use native IBC but something we call Peggy, mostly because IBC is expensive to write in the EVM today. What Peggy is: the validators of a Cosmos chain act as sort of oracles to the state of Ethereum, and there's a contract on Ethereum where they can basically hold the assets. It's basically a poor man's IBC. It has less to do with the proof of work; it's actually more the limitation on the. **B** (1:17:33): Ability to write it efficiently in there. Yeah, that makes sense. And then another question: are y'all primarily using light client proofs? Proofs are pretty crucial here. Is that the avenue, or are you processing full signatures and stuff? **C** (1:17:51): Yeah, it's light client proofs. And then, you know, IBC is extensible, where you can have other types of proofs as well if you want. **B** (1:18:06): You could do the full thing, but it's just. **C** (1:18:07): Yeah, you can do fraud proofs, you could require data availability proofs. It's a very modular system where it's up to you to decide what is necessary to accept this IBC block. **B** (1:18:22): Gotcha. **A** (1:18:23): E.J. Jung raised her hand. You have a question?
**B** (1:18:28): Yeah, I have a question for the ETH 2.0 side. As far as I know, the assignment of a contract or an account ID to a shard is static. So if I'm feeling kind of malicious and I write some sort of contract that involves addresses across the shards, involving as many shards as possible, and keep sending transactions across all of them, shouldn't I be able to slow down the other shard chains syncing with the beacon chain? Is that right? Or, as long as I'm willing to pay to put all sorts of transactions through, is it possible? So, each of these shard chains has its own gas limit, block limit, whatever abstraction you want; there's a finite ability of each of these chains to process transactions. And so there's two things. One, I could spam them all; it's just going to cost me money. Better yet, I'll put a Ponzi scheme on them all and people will spam them themselves. But I think the question was more: if I'm on shard A and I want to communicate with, like, five others, I make a transaction that tries to hit all of those other shards. So I hit shard A, and that's going to have these asynchronous transactions that come off of it. Those actually end up looking like just normal transactions on the other shards that I have to pay for. Because these shard chains are isolated from each other, and because anytime I interact with any one of them it doesn't really matter what the source of that interaction is, I just have to pay for it. So yes, I can spam these things, but it will become prohibitively expensive, assuming there's a lot of other activity on the chains. If there's not a lot of activity on, like, shard 10, then maybe the gas price is pretty low and you can spam the hell out of it.
But, you know, the core protocol doesn't really care what you do with it from that standpoint. Did that answer your question? I hope so. Very cool, nice thumbs up. **A** (1:20:55): Drew's iPhone asks: can Sunny elaborate on his "ETH is more like real estate in Manhattan, whereas Cosmos is a house in the suburbs" analogy? **C** (1:21:07): Yeah, sure. This is actually not something I came up with. It's something that Luis from Aragon came up with, and he made this great analogy I really liked. If you're building on Ethereum today, it's a great place in that there's a lot happening there. Everything is super composable, and there's a lot of money there, liquidity to build stuff on, like DeFi stuff. Just like how most of the finance industry is in Manhattan: if you want to build something in DeFi, you should probably go build it on Ethereum. But at the same time, Manhattan is this crazy expensive place to live, and it's not really for everyone. And so Cosmos is basically the house in the suburbs, where having your own application-specific blockchain is, you know, the American dream: you have your own white picket fence and you are the owner of your land. You're not renting land from someone else. In Manhattan, you're usually just renting from a landlord, or you're renting from Ethereum. **A** (1:22:27): And when you're living in a major metropolitan city like that, you're also paying tax to the government that provides security for you. Whereas if you live in the outskirts of the city, you kind of have to protect your own land, like in Cosmos, and you have to tote your own guns and everything, because you're sovereign.
And because it takes, like, 30 minutes for the cops to come if you call, you have to defend yourself. **B** (1:22:54): Right. Which I was going to say: I think the one thing the analogy is missing is that there is this security trade-off there. And that's why I question the Cosmos thesis a little bit. I'm all for all the different experiments, but I don't know if we're going to see thousands of viable chains. I think we might see many. But when I see application-specific blockchains, I'm worried about having enough security on those blockchains. Obviously, if it's a small blockchain, maybe it's not worth attacking, but people can attack you. And maybe we don't have time for this today, but Sunny, I was curious to hear about the roadmap of the Cosmos Hub lending security, or maybe providing security, to some of these heterogeneous chains. **C** (1:23:41): Yeah, I can talk about that right now. I think we have time. So, two things I would say. One, about why I think it works: have I told you about my whole, what I call, the Maker dilemma? **B** (1:23:59): I think I've heard a version of it, but maybe. **C** (1:24:02): Okay, yeah. Essentially it's my claim that Maker doesn't get any security from being on Ethereum. Basically, if you wanted to attack Maker, you wouldn't attack ETH, you would rather attack the security of the MKR system, because it's much lower security. And if you could take control of MKR, you could essentially steal all the collateral from MKR, and you can mint as much Dai as you want, and you can basically destroy the entire system. And the same actually ends up being true for... so, Paul Sztorc wrote this article many years ago called Smart Contracts are Oracles.
And I never quite understood it until recently. I disagree with him when he says all smart contracts are oracles, but I agree with the notion that many of them are. So if I wanted to attack Augur, I wouldn't attack ETH, I'd attack the security of the REP system. If I wanted to attack 0x, I wouldn't attack ETH, I would attack the security of the 0x token. And this is just true for so many things. So essentially I would make the claim that many systems are not actually getting that much of their security from Ethereum, but they end up paying taxes to Ethereum in the form of transaction fees denominated in ETH. This is my reasoning for why... I mean, I have a one-MKR bet that Maker will split off onto its own chain within three years. **B** (1:25:40): Right. And I see that, but there are certain things the core chain provides that you've now removed from the attack scope. **A** (1:25:48): Right. **B** (1:25:48): You're provided non-reversion, and you're provided correct execution. Whereas if I'm now on my own chain, I've taken some of the things the core chain might provide me and added them into the attack scope of that system. So I do agree that the avenues you take to attack subsystems of Ethereum are to attack those economics in particular. But I don't necessarily agree that there's no value the system provides to these applications. And there are things other than security that I think are valuable about existing in a single system. **C** (1:26:43): Yeah. I think the main value, like I said, comes from the composability, which is why I like the Manhattan thing. I agree there are massive benefits; I live in Manhattan. But I just think that for many use cases, the security is not coming from Ethereum.
There are obviously a lot of them that are. Uniswap, for example, is something that actually does in fact get all of its core security from Ethereum, and there are many others like that. But on to the second topic, shared security. To be honest, I can't speak on behalf of the entire Cosmos community, because the world before you launch something and after you launch something is very different for architecture design. Before, you design the roadmap; now it's, oh, governance decides the roadmap and I can propose things. **B** (1:27:44): You have users with feelings. **C** (1:27:48): Yeah. So my personal proposal for how shared security should work on the Cosmos Hub actually doubles down on this notion that not everything needs equal security, which is the premise of Cosmos. Let's say you've built a new chain. You can basically show up to the Cosmos Hub and say, here's my chain protocol. And then validators can look at it and say: okay, these are the rewards, this is the use case, this is how much we predict we might be able to make from fees on running this chain. Is this something we want to run? Unlike Ethereum 2.0, which is like, here's a big validator set, and it assigns: okay, you 50 go to this shard, you 50 to this shard, you 50 to this shard. In our system, each validator makes their own decision: do I want to run this chain or not? So maybe you'll have some smaller hobbyist validators say, we're only going to run a few really high-value chains. And then maybe you have some more professional-style validators who say, we can scale out basically infinitely; we'll just run as many shards as we want.
And when they do that, they're putting their atoms at slashable risk for any faults they commit on these new chains as well. So what's going to end up happening is you'll have some chains that have all of the thousands of validators on the Cosmos Hub co-validating them, and you'll have some shards (I'm going to use the word shards; I don't know what the name is going to be) that maybe have like 20 validators. But you know what, that's okay, because the MakerDAO chain doesn't need the same security as the CryptoKitties chain. **B** (1:29:41): Yeah. So you imagine some sort of, at the hub, I can maybe elect to say I'm running this, or maybe in the chain I signal, and the chain looks at the hub and goes, yeah, you're a validator now. **C** (1:29:57): Yeah. So you can imagine that the staking system for all these other chains is actually on the hub. You use atoms that you already have staked, and you basically say, I'm making these slashable for faults of this chain. And then over IBC it lets the chain know: hey, add this validator to your Tendermint validator set. **B** (1:30:22): And so the slashing conditions on the hub then have to be more generic and accept proofs. **C** (1:30:28): Yeah, you can define your slashing conditions. By default it will have the Tendermint conditions built in, so the liveness and double-sign ones. But you can write more slashing conditions as WebAssembly blobs, where for a given chain you can basically define more and more slashing conditions.
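The opt-in flavor of shared security being proposed above can be sketched as a toy model: each hub validator individually chooses which consumer chains to validate, and its already-staked atoms become slashable there. This is a hypothetical illustration of the proposal as described, not a real Cosmos Hub module; all names are made up.

```python
# Hypothetical sketch of opt-in shared security on a hub, as described above.

class Hub:
    def __init__(self):
        self.stakes = {}    # validator -> staked atoms (slashable)
        self.opted_in = {}  # chain_id -> set of validators

    def stake(self, validator, amount):
        self.stakes[validator] = self.stakes.get(validator, 0) + amount

    def opt_in(self, validator, chain_id):
        """Validator volunteers for a consumer chain; over IBC the chain
        would be told to add this validator to its Tendermint set."""
        self.opted_in.setdefault(chain_id, set()).add(validator)

    def validator_set(self, chain_id):
        """The chain's validator set is drawn from hub stake, so each chain
        can end up with a different (non-uniform) level of security."""
        return {v: self.stakes[v] for v in self.opted_in.get(chain_id, set())}

hub = Hub()
hub.stake("hobbyist", 100)
hub.stake("pro", 5000)
hub.opt_in("pro", "makerdao-chain")
hub.opt_in("pro", "kitties-chain")
hub.opt_in("hobbyist", "makerdao-chain")
print(sorted(hub.validator_set("makerdao-chain")))  # ['hobbyist', 'pro']
print(sorted(hub.validator_set("kitties-chain")))   # ['pro']
```

The key contrast with ETH 2.0's random committee assignment is visible here: the set per chain is whatever validators chose to join, so security is heterogeneous by design.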
**B** (1:30:52): The one worry I have, and I'm sure there are ways around it, I just haven't thought about it much, is the race conditions: me doing something slashable so that I can effectively take my one stake, which will ultimately be slashed, but do eight things with it on the different chains first, and conduct some attack across many chains for the value of just my one stake. That would be the fun stuff to think about. **C** (1:31:19): Yep. And yeah, that's one of the things we're trying to figure out in particular: how leveraged can you be on that? **B** (1:31:30): Right. **C** (1:31:30): This is something general in Cosmos where I'm personally of the opinion that it's okay for the system to be securing more value than the underlying staking token itself is worth. Because I think that a large percentage of the security actually comes from coordination problems: the fact that you need to get a third of the validators to coordinate. I think we get more security from that than from the actual slashable amount. **B** (1:32:01): Yeah, interesting. **C** (1:32:03): So I think a lot of the people in the Cosmos community are generally in favor of this sort of heterogeneous, non-uniform shared security. But then the questions come around things like: should they be allowed to be over-leveraged or not, and if so, how much? A lot of questions around that. **B** (1:32:28): Yeah. Adriana, would you? **D** (1:32:32): Hi. I have two questions for Danny. First of all, since he has a lot of insight into the Ethereum ecosystem and community, the first question is: how does the Ethereum community, from his point of view, think of Cosmos? What are the advantages and disadvantages that he thinks the community views us as having, in terms of.
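The over-leverage worry raised above can be put into a back-of-the-envelope calculation: one bonded stake opted in to N chains can, in the worst case, commit a fault on every chain before any slash executes, yet it can only be slashed once. All numbers below are illustrative, and this ignores the coordination-cost argument Sunny makes, which is precisely the counterweight being debated.

```python
# Back-of-the-envelope model of stake leverage across N consumer chains.
# Illustrative numbers only; not a claim about real chain parameters.

def leverage(stake, chains, value_at_risk_per_chain):
    """Value an attacker could target across all chains vs. what they lose:
    the single stake is slashed at most once, however many chains it faults on."""
    gained = chains * value_at_risk_per_chain
    lost = stake
    return gained, lost, gained / lost

gained, lost, ratio = leverage(stake=100, chains=8, value_at_risk_per_chain=50)
print(gained, lost, ratio)  # 400 100 4.0
```

A ratio above 1.0 is exactly the "over-leveraged" condition the governance questions above are about: how much of it, if any, to allow.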
**B** (1:33:12): Network? **D** (1:33:12): Yeah, yeah. **A** (1:33:14): How do ETH maxis look at us, exactly? **B** (1:33:18): I mean, there's a difference between the ETH maxis and the ETH community, right? The ETH maxis are going to be like, we don't need Cosmos because we're just going to have rollups and scaling and you can do it all here. Whereas I think a lot of the community is probably friendly, with latent interest. I think that as the Cosmos ecosystem builds out, and if many of these chains come on board and it's easy to communicate with them, then... the Ethereum ecosystem is generally all about leveraging the tools at its disposal. So I think they'd leverage it, because if there are interesting things they can do to add value and do new stuff with applications, the Ethereum ecosystem is going to be like, okay, let's do it. Whereas again, the maxis are going to be like, we don't really need it. But maybe it's somewhere in between. Okay. **D** (1:34:09): And the second one was about smart contracts and dapps. Currently, Ethereum dominates this entire market of smart contracts and dapps. But now we also have on the Cosmos Hub a proposal, which passed, for the addition of smart contracts. How do you view, in the near future, this market of smart contracts? Do you think there will be a migration from Ethereum to chains like Cosmos or Polkadot? How do you view that? **B** (1:34:57): Optimistically, I have to say that the tide is still rising, and the amount of activity we've seen on these chains is minuscule compared to what we're going to see in five years. So there's probably room for growth in all sorts of places. I don't know.
I think that a lot of diehard Ethereum people are really angling at and looking at the rollup space as a place for temporary relief, to be able to continue to do what they do in an Ethereum-related environment. But you also see a team here and there that's like, okay, we're moving to this, we're doing that, we're trying this. So there's a ton of experimentation, and I just expect a lot of growth in the ecosystem, in the sector in general. You'll see a lot of fresh people go to Cosmos, and you might see some old Ethereum people go to Cosmos, but people are just figuring it all out. It's a grand experiment right now, and I expect a lot of growth and revival. **C** (1:35:59): What I personally think is, I just want to see more and more evolution in smart contracting systems. Like Danny was mentioning, if it turns out that in ETH 2.0 we just have 64 EVMs, that would make me really sad. You know, there's a handful of. **B** (1:36:19): People that agree with you, yeah. **C** (1:36:23): Yeah. I don't know. One of the reasons I started working on Cosmos was that I just got fed up writing Solidity. There are just a lot of places where I want to see more and more development in smart contracting systems. And what I do think is that we'll see a lot of smart contracting systems become cross-platform. So for example, in the Cosmos SDK we have an EVM built in that you can use to run your smart contracts on if you want. And at the same time, the smart contract system that is being added to the Cosmos Hub is in a very limited sense; it's sort of governance-permissioned smart contracting. You submit a contract, and then governance of the Hub can choose whether to approve adding that contract or not. **B** (1:37:16): It's.
**C** (1:37:16): It's less of a smart contracting system and more of an alternative way of developing the Cosmos Hub. So if you don't want to write Go core modules, if you want to add new IBC integrations, you could do it easily using that. But what I've been personally trying to suggest to the CosmWasm team, the one that built it, is that they should start building this as an ETH 2.0 execution environment as well. And so you'd have the same Cosmos execution environment, both as its own native chain as well as an ETH 2.0 execution environment. And I think being able to have those interact with each other would be really cool. **B** (1:37:57): Yeah. Question: the WASM in CosmWasm, is it interpreted or compiled? **C** (1:38:01): It is. Oh, I think. **B** (1:38:05): It's okay if you don't know the answer. **C** (1:38:06): What do you mean? Is the. **B** (1:38:08): So what does the virtual machine do? Is the WASM compiled, or is it just interpreted by a virtual machine? There are potentially some speed gains with compilation, but there's also a whole can of worms with the consensus having to compile, people having to compile things locally, compilation bombs. **C** (1:38:29): When they started working on it a year ago, at the Berlin hackathon, they were using wasmd, which I think is interpreted. But I'm not sure; I haven't actually dug into the technicals. I dug into it like a year ago; I haven't looked at what they're using today. **B** (1:38:48): Compiled code is extremely difficult to meter and have strong guarantees about what it's actually going to do. But. **C** (1:38:57): Well, the wasmd stuff that they built has the metering and everything. **B** (1:39:04): Yeah, okay.
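The metering point being made here, that interpreters are comparatively easy to gas-meter, can be illustrated with a toy VM: the interpreter loop charges gas per instruction and halts deterministically when the budget runs out. Compiled code has no such natural per-instruction hook, which is the concern raised above. This mini-VM is entirely made up; it is not CosmWasm or any real WASM runtime.

```python
# Toy stack machine showing per-instruction gas metering in an interpreter.
# Hypothetical illustration; opcodes and costs are invented.

def run(program, gas_limit):
    """program: list of ('push', n) or ('add',) ops.
    Returns (final_stack, gas_used), or raises if gas runs out."""
    COSTS = {"push": 1, "add": 3}
    stack, gas = [], 0
    for op, *args in program:
        gas += COSTS[op]          # the metering hook: charge before executing
        if gas > gas_limit:
            raise RuntimeError("out of gas")
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack, gas

print(run([("push", 2), ("push", 3), ("add",)], gas_limit=10))  # ([5], 5)
```

Because every opcode passes through the same loop, execution is bounded and deterministic across all nodes, which is what consensus requires.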
**C** (1:39:06): It's actually a full-on... one of the things is that when people say VM, like smart contract VMs, it's very vague whether they're talking about the bytecode or about the entire execution environment. **B** (1:39:17): Yeah, for sure. **C** (1:39:18): Yeah. It's a fully designed execution environment. So I'm pretty sure it's using an interpreter. **B** (1:39:24): Yeah, yeah. **A** (1:39:30): I want to go back to the previous topic about rollups. I'm just curious, Danny, what's the particular flavor that you feel is most viable currently? **B** (1:39:40): I mean, this is not my domain, and I'm the first to say that layer two is always exciting, but it's also always this thing that's about to solve all of our problems, and then we get far along and we're like, it didn't quite do what we want. So I'm optimistic about rollups. That's not... I didn't mean to make a pun; there are optimistic rollups, but I'm optimistic about rollups generally. I'm optimistic that they might become a very solid component of the picture, but I want to see a little bit more execution, and we're beginning to see some, before I claim that they're going to solve all of our problems. **C** (1:40:18): The. **B** (1:40:20): Rollups that are primarily for token transfers and DEXes, I think they're cool and exciting for some use cases, but they're not going to solve all of our problems. Optimistic rollups, I haven't looked into too recently. The delays in going back to the main chain, and the complexity of main-chain fraud proofs and having to execute everything, is a little bit scary to me intuitively. But there are a lot of smart people working on it. The holy grail is a general-purpose ZK rollup, where you can do all sorts of computation and run smart contracts in rollup land.
And I think that we will probably get there, but I think we're a handful of years out from being able to handle those types of complexity in the circuits. When we get there, we'll also be seriously looking into, and Cosmos probably will be as well, all sorts of things. If we can do that kind of stuff in rollups, we can do really fancy stuff in layer one with respect to proving computation and state transitions without having to actually execute them. And there's a lot of scale and privacy, and even all sorts of light client proofs and things, where I think in the three-to-five-year time horizon there's going to be a lot there. But not tomorrow. **C** (1:41:42): It's funny, because I actually feel like optimistic rollups are what sharding used to be three, four years ago. And then the scope of sharding almost increased to also involve solving these data availability concerns and all these other things. But I feel like the original ETH 2.0, or back then it was called the Serenity roadmap, was essentially Casper plus optimistic rollups. **B** (1:42:09): It's always been about randomly sampling consensus participants, though I don't know exactly what care was taken with the data availability problem at the time. But certainly the notion of sharding the cryptoeconomic protocol, splitting participants across shards randomly, has been pretty core for a while. I know that there was this thing Vitalik published years ago called shadow chains, and I think optimistic rollups pretty much are shadow chains. So the ideas have been floating around. Hopefully we get some really cool stuff. I know that even outside of rollup land, layer 2 on Ethereum and other chains is continuing to develop.
So if we get layer two right, it's going to always be important to the puzzle, because it will always be able to be faster and cheaper than having all of your interactions on the layer 1 chain, regardless of how much scale you have, even if you're sharding. So it will be a piece of the puzzle, but we're still figuring it out. **A** (1:43:20): Why 64 shards, as opposed to, like, n number of shards? **B** (1:43:26): Why 64? So previously we actually had 1,024 shards, and this was due to the overhead. There's a certain amount of overhead in crosslinking, and you have to be able to split the consensus participants across these shards in a safe way, so you have to have safe committee sizes. When we had more shards, crosslinking essentially just took longer: you'd be able to crosslink one shard every epoch instead of one shard every slot, and that rate of crosslinking defines the rate of asynchronous communication between shards. After conversations with developers, community members, and dapp developers at Devcon, it became clear that making crosslinking faster, even at the cost of fewer shards, would be a valuable thing to the community. And so the reduction in shards and the increase in the rate of crosslinks, while still preserving these safe committee sizes, is where you come up with that number of shards. You know, there's a lot of scale there. The idea is to have actually relatively large blocks, on the order of 256 to 500 kilobytes, and to still have high data availability for the entire system, but we still need to do a lot of real world network testing to see. **A** (1:45:05): I have a question from Jim to Danny. What are Danny's favorite non-DeFi dapps that he's most excited to see once eth2 is out? Non-financial dapps.
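Danny's committee-size reasoning can be sketched with rough numbers. This is an illustrative back-of-the-envelope calculation, not the actual spec derivation: the validator count of 300,000 is an assumption, while 32 slots per epoch and a minimum committee size of 128 are the well-known eth2 Phase 0 constants.

```python
# Why fewer shards allow crosslinking every slot: with a fixed
# validator set, more committees per epoch means smaller committees,
# and committees below a safety threshold aren't acceptable.
# Illustrative numbers; 300,000 validators is an assumption.

SLOTS_PER_EPOCH = 32
MIN_COMMITTEE_SIZE = 128

def can_crosslink_every_slot(n_validators: int, n_shards: int) -> bool:
    """Can every shard get a safe committee in every slot of an epoch?"""
    committees_per_epoch = n_shards * SLOTS_PER_EPOCH
    committee_size = n_validators // committees_per_epoch
    return committee_size >= MIN_COMMITTEE_SIZE

# With 1,024 shards, committees of ~9 are far too small, so crosslinks
# could only happen about once per shard per epoch:
print(can_crosslink_every_slot(300_000, 1024))  # False
# With 64 shards, committees of ~146 clear the safety bar:
print(can_crosslink_every_slot(300_000, 64))    # True
```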
**B** (1:45:27): Does a Ponzi scheme count as a dapp? As a DeFi dapp? No, I'm just joking. **A** (1:45:32): Yeah, I don't know. **B** (1:45:35): This is a little bit of a cop out answer, because I just don't spend a ton of time in the dapp world. But with this amount of scale, and especially with the data availability of the system, which I think is a new component of eth2, this high data throughput, I think the things that are going to be awesome on this thing haven't really been thought of yet. I am really excited about the DeFi ecosystem. I think having open financial access and open financial instruments is super, super exciting and moving in the right direction. But I don't know. I'm sorry, I don't have an answer to this. I feel like I certainly should, because even the stuff that I keep thinking about keeps going back to financial. Insurance is kind of cool, but that's financial. Prediction markets are cool, but that's financial. NFTs are cool, but people trade them, so they kind of become financial. I don't know. Gaming is fun. Sorry, I wish I had a better answer for you. What about Sunny? Regardless of whether it's on eth2 or on specific application chains, outside of financial applications, what are you. **C** (1:46:54): Excited about outside of financial applications? **B** (1:47:01): Sovereign identity. I think that's a crucial component of this whole thing, leveraging keys in crypto-economically secure ways. **C** (1:47:10): Maybe that. I'm super interested in anything that has to do with reputation systems. I'm just obsessed with anything to do with web of trust style stuff.
And so, yeah, a lot of the use cases, or some of the use cases I think about, are financial, but some of them are not. You know, my dream is I'd love to help create a web of trust based consensus protocol so we can get rid of proof of stake completely. So yeah, anything with reputation systems and webs of trust. **B** (1:47:50): So Ken Liu is one of my favorite writers. He has recently released a new collection of short stories, and there's a short story in it that I think y'all would be interested in reading called Byzantine Empathy. And it happens to involve blockchains and reputation systems. **C** (1:48:05): Okay, I'll check it out. **B** (1:48:10): Yeah, I think I have a link to it right here. Oh yeah, they publish it for free on the Internet. I'll share it with you. **A** (1:48:15): Is it on Amazon? Okay, we'll add it to the show notes. **B** (1:48:19): The book, the collection of short stories, is a whole thing, but the full story that I was talking about is actually published at the link I just shared. **C** (1:48:30): Cool. Yeah. **A** (1:48:33): Well, 10 more minutes. Any last questions before sign off? **B** (1:48:39): Luigi, are you eating a steak tonight? **A** (1:48:44): That is a marbled steak. **B** (1:48:46): I know, it's so beautiful. **A** (1:48:49): That is bonded. **B** (1:48:52): Like it's his proof of stake protocol. That joke is so overplayed. But you know. **A** (1:48:58): No, I said it's bonded because it's premium. **B** (1:49:02): It's the good and wrap. **D** (1:49:05): Okay, I have one more left, if no one is asking. **B** (1:49:10): So. **D** (1:49:14): Danny, your favorite thing about Cosmos, and Sunny, your favorite thing about Ethereum. Just one thing. **B** (1:49:22): This is a little bit of a cop out, but this is a grand experiment we have.
There are so many different trade-offs and so many different avenues, and Cosmos is doing it way differently than anyone else, so it's excellent to see how it plays out. And just, like, mad respect. I know Cosmos in many ways is pushing on the frontier of actually secure proof of stake and actually implemented it. You know, getting that thing into production is badass. **C** (1:49:53): Yeah. To me, with Ethereum, I think it's just the ease of use. If I want to write something in the Cosmos SDK, I can't just spin up something quickly. But last week, I rewrote essentially a put option system in Solidity in like 20 lines of code. First off, if you want to do that in the traditional financial system, you've got to go pay a lawyer for like two hours of work to write you those contracts. If you want to do that on the Cosmos SDK, it'll take you maybe two hours to scaffold an entire chain to do some simple operation like that. While on Ethereum, it took me like 20 minutes to write it and deploy it. **B** (1:50:48): And, you know, when I first got into Ethereum, that was what I expected to see a lot more of: these small, functional contracts that link together in really interesting ways. Whereas I think the ICO thing really pushed it towards making these larger protocols that try to encompass a bunch of things inside of Ethereum. So with things like Uniswap, I really hope that we see more of these small little things that do one thing really well. We'll see. **A** (1:51:20): Luigi asks: in eth1 there was the Whisper protocol for chat apps like Status. In eth2, is it still available for censorship-resistant chat applications? **B** (1:51:33): Right. So I don't know the current status of Whisper.
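Sunny's point is that the core of a put option really is tiny. Here's a hedged sketch of that settlement logic, written in Python rather than his actual Solidity contract, with all names invented for illustration: the holder pays a premium up front, the writer locks collateral, and at exercise the holder's payoff is just max(strike - spot, 0).

```python
# Minimal sketch of put option settlement logic. This is an
# illustration of why the contract fits in ~20 lines, not a
# reconstruction of Sunny's actual Solidity code.
from dataclasses import dataclass

@dataclass
class PutOption:
    strike: float      # price the holder may sell the asset at
    premium: float     # paid up front by the holder to the writer
    collateral: float  # locked by the writer to guarantee payout

    def payoff(self, spot_price: float) -> float:
        """Holder's payoff at exercise: max(strike - spot, 0)."""
        return max(self.strike - spot_price, 0.0)

opt = PutOption(strike=100.0, premium=5.0, collateral=100.0)
print(opt.payoff(80.0))   # 20.0: spot below strike, holder exercises
print(opt.payoff(120.0))  # 0.0: option expires worthless
```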
Whisper, I think the only actual production implementations were relatively toy and weren't super production ready. I believe Status has either taken over the stewardship of Whisper and is pushing on that protocol, or they might have done some fundamental rewrites. So it's something I would love to see exist, but there's not a single unified protocol that's core to the stack. Status specifically is building something there; I'm not super privy to it. But if they can build it on Ethereum today, likely it would be usable on eth2. **A** (1:52:25): I know a handful of applications that are addressing this problem. Within the Tendermint team itself, we're building Dither, which is supposed to address this problem, and there are some projects building on Handshake that also do this in a censorship-resistant way. Okay. Dogemos asks: how do you think the potential concern for exchange centralization due to initial liquidity, sorry, due to initial illiquidity, should be addressed for Phase 0 of eth2? **B** (1:52:58): I see the push and pull on that, and I see the argument for users going to exchanges because of potential liquidity options. But I also see the argument for this being kind of a boon to hobbyists and early adopters getting involved. We will certainly see staking via exchanges. I think there's also a strong ethos in the community to stake on your own, and a major component of the design is to ensure that this software is built such that I can run it on pretty resource-restricted hardware. So there's certainly a concern. There's always a concern. There's maybe some additional concern in phase zero, but I think that the early adopters are going to be largely hobbyists, and we're going to see a decent distribution. But I wish I could predict the future on this one.
I'm curious on the Cosmos front, since you all have been running for, what, a year and three months now. What's the institutional picture? I know there's probably a little bit more institutional staking built in because of the way delegation works. But in terms of actual exchanges and these really large entities that maybe aren't aligned, what do you all see on the distribution of staking there? Like, does Binance have a big pool or a big staker they run? **C** (1:54:27): Yeah, Binance is a big one. They have about, let's say, 5% of the stake, but it's not people delegating to them, they just have it. So if you hold atoms on Binance. **B** (1:54:42): You get a return. **C** (1:54:43): Yeah. And, you know, Coinbase does the same thing with Tezos, and I'm sure basically every exchange will probably end up doing something similar with eth2. **B** (1:54:54): What's the largest entity on Cosmos that does that? Not delegation, but actually just staking other people's atoms. Binance? **C** (1:55:04): Polychain stakes all of their own atoms, which is quite a bit, and they also run a major, major validator. **B** (1:55:16): What's that one? **C** (1:55:18): Just called Polychain. **B** (1:55:19): Sorry, the amount. **C** (1:55:21): Oh, oh, let me check. I don't know, probably around 6% or so. **B** (1:55:27): Okay. **C** (1:55:28): Yeah. **D** (1:55:28): Currently the biggest one is stakefish with a maximum of 6.88%. **C** (1:55:34): Yeah. **B** (1:55:35): Okay. That's actually a pretty healthy distribution. **C** (1:55:38): It is. **B** (1:55:38): I'm certainly worried about 20% players showing up, especially with the exchanges, but hopefully we get a better distribution than that. **C** (1:55:53): Yeah, the community has actually been pretty proactive in trying to make sure that doesn't happen.
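The distribution numbers being discussed, roughly 5 to 7% per large validator with 20% as the worry threshold, are easy to sanity-check programmatically. A small illustrative sketch; the stake figures are the approximate ones mentioned in conversation, and the second example is a hypothetical concentrated chain, not real data.

```python
# Flag validators whose share of total stake exceeds a worry threshold.
# Stake figures are approximations from the conversation; the
# "whale_exchange" scenario is hypothetical.

def flag_concentration(stakes: dict, total_stake: float,
                       threshold: float = 0.20) -> list:
    """Return names of validators whose share exceeds `threshold`."""
    return [name for name, s in stakes.items() if s / total_stake > threshold]

# Rough Cosmos Hub picture from the discussion: nobody near 20%.
cosmos_hub = {"stakefish": 6.88, "Binance": 5.0, "Polychain": 6.0}
print(flag_concentration(cosmos_hub, 100.0))   # [] -- fairly healthy

# Hypothetical newer chain where one exchange dominates:
concentrated = {"Binance": 12.0, "whale_exchange": 25.0}
print(flag_concentration(concentrated, 100.0)) # ['whale_exchange']
```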
I think a lot of the community wouldn't want to, because they understand that delegating to a player that has 20% actually ends up creating massive. **A** (1:56:11): There are also good enough high quality client-side mobile apps that allow you to do non-custodial staking, so there's very little reason to leave it on an exchange. **B** (1:56:24): Nice. **C** (1:56:26): But at the same time, a lot of the exchanges joined Cosmos later on, like a year into the staking. If you look at some of the newer chains that launched more recently, a lot of the largest validators are exchanges, and exchanges do actually have a much higher percentage. You know, the other chain that my validator company, Sikka, runs on is a chain called Kava, and on that one, Binance has about 12% of the stake. **B** (1:57:01): Gotcha. Are y'all still doing zero fees? **C** (1:57:06): No, we have been charging fees for eight months now. **B** (1:57:12): Okay, I remember that was a very controversial move. **C** (1:57:16): Yeah, yeah, we did that for, like, the first six months of the Cosmos Hub, and then we started charging fees. **B** (1:57:23): Nice. Is there any additional overhead to being a staker, like taking on other people's delegation? **C** (1:57:31): What do you mean by overhead? **B** (1:57:35): Either in the form of risk to your own assets, or requirements for your hardware and uptime and things like that? **C** (1:57:44): No. **B** (1:57:44): No? So you could rationally run with, like, near zero or zero fees? **C** (1:57:50): Yes. **B** (1:57:51): Yeah. **A** (1:57:54): If you guys want to continue the conversation, we can continue to host it. I'm just going to stop the recording now and, like, officially end it, but if you guys want to keep talking, just stay on. No worries.