Personally I am not a fan of rollups.
It is sort of the opposite of the blockchain ethos (a single source of verifiable truth open to everyone).
However, zk-rollups largely fix that problem, because you can run a computation off-chain, prove that you did the computation honestly, and post that proof on the main chain.
However, with EVM apps you have this problem where, if you want to interact with a dapp, that dapp has to exist on the network you are currently on.
I.e., dapps on Optimism only work for you if you have an account on Optimism, and dapps on Loopring only work if you have an account on Loopring.
This creates a fractured ecosystem where you always have to keep bridging between different networks to avoid fees and network congestion.
Vitalik also acknowledged this user-experience pain point in his latest appearance on the Bankless podcast.
Also, the EVM cannot run in parallel unless you fundamentally change how it works, so by its very nature it will congest any layer 2. (The Tezos blockchain is not that much different either.)
This also applies to the Avalanche C-Chain. It's actually a pretty simple calculation to make: all you need to do is take the gas limit of a block, take the gas needed for a simple transaction, and divide.
So for the Avalanche C-Chain, with 10 million gas per block, 21,000 gas per transfer, and a 10-second block time, it averages out to roughly 50 TPS if we are only doing simple transactions.
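That back-of-the-envelope calculation can be written out explicitly. A minimal sketch, using the figures quoted above (not live chain parameters, which change over time):

```python
# Rough upper bound on simple-transfer throughput for an EVM-style chain,
# using the figures quoted above. Swap in current on-chain values as needed.

GAS_LIMIT_PER_BLOCK = 10_000_000  # gas available in one block
GAS_PER_TRANSFER = 21_000         # gas cost of a plain value transfer
BLOCK_TIME_SECONDS = 10           # average block interval

transfers_per_block = GAS_LIMIT_PER_BLOCK // GAS_PER_TRANSFER  # 476
tps = transfers_per_block / BLOCK_TIME_SECONDS                 # 47.6

print(f"{transfers_per_block} transfers/block, ~{tps:.1f} TPS")
```

Which lands on the "roughly 50 TPS" figure above; any contract interaction costs more than 21,000 gas, so real-world throughput is lower.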
That being said, I know the founder of Tezos is in support of rollups.
To me, it seems like supporting layer 2s just means supporting sharding with extra steps. Why not just shard your blockchain in the first place?
Also, personally, I don't see the average person switching networks just so they can transact with each other.
Layer 2s are not very user friendly. I think the ultimate goal for a blockchain is to shard its ecosystem for scalability but hide and abstract the sharding away from developers and users.
Users should be able to transact on the blockchain as if it is one chain, while under the hood it's sharded.
My answer is below:
In terms of ethos, I wouldn't use Optimism as your reference point; Arbitrum is a much better design.
Generally speaking, all arguments for monolithic execution are better arguments for heavy-duty rollups, even if it means a single rollup for one chain (similar to Flow). To me, this is the killer argument in favor of rollups. You keep the nodes maintaining consensus lightweight, where an honest majority matters and decentralization is paramount, and you push heavy-duty execution into an environment where you require a single honest party instead of a majority.
The one caveat to this is parallelization of execution, for example with multiple cores. While it’s possible to do that within rollups, it requires more work — you have to treat the execution trace as a graph instead of a sequence. So does this mean we should eschew rollups to focus primarily on parallelized execution at L1? Not really:
To begin with, aside from some low-hanging fruit in terms of parallelism (signature verification, for example), which can be special-cased in rollups, it's quite hard to get a gas model that works for parallelized execution. Optimistically running code in parallel and rerunning conflicts is a common strategy, but you cannot predict the gas cost for it. It's not enough for a parallelization scheme to be fast; we have to be able to predict how slow it's going to be in the worst case to set gas limits.

Mechanisms that do work end up resembling L2: you divvy up part of your ledger into zones and allow parallelized execution when transactions commit to staying within a zone. You may also have to keep the state on separate disks, as IO costs quickly start dominating. The benefit of that approach over rollups (which also create separate state trees and execution) is that composability is still relatively easy, though it can require a higher gas cost to be executed with a lock on multiple zones. If you look at the execution graph of such a model, it's pretty straightforward: a set of parallel execution traces join at one point, followed by a single execution thread for cross interactions. Unlike arbitrary parallelism, this type of model (which minimizes joins) is not terribly hard to special-case within a rollup.
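The fork-join shape just described can be sketched in a few lines. This is purely illustrative, not any chain's actual scheduler; the zone names and transaction closures are hypothetical:

```python
# Toy model of zone-based parallel execution: transactions that commit to a
# single zone run concurrently (one worker per zone), then all zones join
# and a single thread runs the cross-zone interactions. Illustrative only.

from concurrent.futures import ThreadPoolExecutor

state = {"zone_a": {"x": 10}, "zone_b": {"y": 5}}

def run_zone(zone, txs):
    # Zone-local txs only touch this zone's state, so zones never race.
    for tx in txs:
        tx(state[zone])

zone_txs = {
    "zone_a": [lambda s: s.update(x=s["x"] + 1)],  # stays inside zone_a
    "zone_b": [lambda s: s.update(y=s["y"] * 2)],  # stays inside zone_b
}

# Fork: parallel execution traces, one per zone.
with ThreadPoolExecutor() as pool:
    for f in [pool.submit(run_zone, z, txs) for z, txs in zone_txs.items()]:
        f.result()  # join point: wait for every zone to finish

# Single execution thread for cross interactions (conceptually it holds a
# lock on every zone it touches, hence the higher gas cost).
state["zone_a"]["x"] += state["zone_b"]["y"]

print(state)  # {'zone_a': {'x': 21}, 'zone_b': {'y': 10}}
```

The key property is that gas stays predictable: zone-local work is bounded per zone, and the serial cross-zone tail is priced separately.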
Moreover: computation/IO quickly stops being the bottleneck and bandwidth takes over. Rollups, as envisioned, compose with sharded data availability, whereas most sharded-execution proposals do not.
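To make the "execution trace as a graph" point above concrete, here is a toy scheduler (all names hypothetical) that turns a sequential trace into parallel waves: transactions whose touched state keys overlap must keep their ledger order, while independent ones can run side by side. Note it sidesteps the hard part flagged above, namely making worst-case gas predictable:

```python
# Illustrative sketch: recovering parallelism from a sequential trace by
# treating it as a dependency graph. Each tx declares the state keys it
# touches; overlapping key sets create an ordering edge to the earlier tx.

def schedule_waves(trace):
    """trace: list of (tx_id, touched_keys) in ledger order.
    Returns waves of tx_ids that can safely execute in parallel."""
    wave_of = {}  # tx_id -> wave index
    for i, (tx, keys) in enumerate(trace):
        # A tx must run after the latest earlier tx it conflicts with.
        deps = [wave_of[t] for t, k in trace[:i] if keys & k]
        wave_of[tx] = max(deps, default=-1) + 1
    waves = {}
    for tx, w in wave_of.items():
        waves.setdefault(w, []).append(tx)
    return [waves[w] for w in sorted(waves)]

trace = [
    ("t1", {"alice"}),
    ("t2", {"bob"}),
    ("t3", {"alice", "carol"}),  # conflicts with t1 -> second wave
    ("t4", {"dave"}),
]
print(schedule_waves(trace))  # [['t1', 't2', 't4'], ['t3']]
```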
So, as to your question, why not just shard your blockchain? Well, this is sharding, with fewer steps. Rollups + data-availability sharding are substantially less complicated than most sharding proposals. Some sharding proposals are less complicated, but they typically do not scale data availability.