The concept is sound. Decentralizing liquid staking away from single intermediaries is a real security goal, and making it governance-neutral (no voting power from liquid stake) is the right call. But after reviewing both the Agora post and the actual implementation in GitLab, several things stand out.
1. The implementation is further along than this thread suggests.
The CLST (Canonical Liquid Staking Token) has been under active development in proto_alpha since at least February 6. The tree already contains delegate parameter modules (Clst_delegates_parameters_repr), registered delegate storage, baker registration and parameter-update logic, FA2.1-compliant token entrypoints (export/import of tickets via export_ticket and import_ticket), ticket balance RPCs, and tests. Multiple NL engineers have been committing through February and March, with batches merged to master on March 3 and March 9.
The “heads up” post went live on March 6. A month of active implementation preceded the community’s first look at this feature.
This means key design decisions have already been made. Baker registration works a certain way. Token transfers work a certain way. The ticket mechanics are built. These choices encode economic assumptions that the community hasn’t seen, discussed, or validated.
2. The economics are still a black box.
The code shows bakers register with parameters (fee, capacity ratio) via register_delegate and can update them via update_delegate_parameters. What the code doesn’t answer, and what the Agora post leaves as “work in progress”:
- What are the bounds? The Baker Fee Maximum, Baker Allocation Maximum, and Global Allocation Maximum are undefined. These are the guardrails that determine whether this system is sustainable or a race to the bottom on fees.
- How does the protocol distribute stake across registered bakers? The allocation algorithm is the single most important economic variable in this design. Is it proportional to capacity? Weighted by fee? Round-robin? This isn’t a minor detail. It determines equilibrium baker economics.
- No economic modeling has been shared. No simulation data. No sensitivity analysis. A feature that reshapes staking dynamics across the entire network should come with numbers, not just architecture.
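To make the allocation question concrete, here is a toy sketch of two plausible rules. Everything here is my assumption for illustration: the function names, the `1/fee` weighting, and the numbers do not come from the CLST code, which is exactly the problem — we don't know which rule it implements.

```python
# Hypothetical sketch: how the choice of allocation rule changes baker economics.
# Neither rule is taken from the CLST implementation; both are illustrative.

def proportional_to_capacity(bakers, total_stake):
    """Each baker receives stake in proportion to its declared capacity."""
    cap_sum = sum(b["capacity"] for b in bakers)
    return {b["name"]: total_stake * b["capacity"] / cap_sum for b in bakers}

def fee_weighted(bakers, total_stake):
    """Lower-fee bakers receive more stake (weight = 1 / fee)."""
    weights = {b["name"]: 1.0 / b["fee"] for b in bakers}
    w_sum = sum(weights.values())
    return {name: total_stake * w / w_sum for name, w in weights.items()}

bakers = [
    {"name": "A", "capacity": 100_000, "fee": 0.05},
    {"name": "B", "capacity": 50_000,  "fee": 0.02},
]
total = 90_000

print(proportional_to_capacity(bakers, total))  # A gets 2/3, B gets 1/3
print(fee_weighted(bakers, total))              # B gets more despite half the capacity
```

Two reasonable-sounding rules hand the same 90k tez to opposite bakers. That divergence is the equilibrium question the design doc needs to answer.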
3. Slashing socialization is confirmed and needs parameters.
zaynah confirmed in this thread that slashing losses are socialized across all sTEZ holders via exchange rate adjustment: “the loss is shared proportionally across the system through a small adjustment of the exchange rate.” The eligibility gate (clean slashing history) reduces probability, but an eligible baker can misbehave after registration. Responsible bakers’ stakers subsidize the losses caused by irresponsible ones.
The slashing history lookback window and severity curve are not specified. These define the risk profile of holding sTEZ and need to be published before anyone can make an informed evaluation.
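For readers who want the mechanism zaynah described in arithmetic form, here is a worked example. The exchange-rate adjustment is confirmed in the thread; all the numbers are hypothetical.

```python
# Hypothetical numbers illustrating socialized slashing via exchange-rate
# adjustment: a slash against one baker lowers the system-wide tez/sTEZ rate,
# so every sTEZ holder absorbs a pro-rata share of the loss.

total_tez = 1_000_000.0   # tez backing the system (assumed)
total_stez = 950_000.0    # sTEZ in circulation (assumed)
rate_before = total_tez / total_stez

slashed = 10_000.0        # loss from one misbehaving baker (assumed)
rate_after = (total_tez - slashed) / total_stez

holder_stez = 1_000.0     # a holder whose stake never touched the slashed baker
loss = holder_stez * (rate_before - rate_after)
print(f"rate: {rate_before:.6f} -> {rate_after:.6f}, holder loss: {loss:.2f} tez")
```

Note the holder in this example had no exposure to the misbehaving baker and still loses roughly 10.5 tez. The lookback window and severity curve determine how often this happens and how big `slashed` can get, which is why they belong in the design doc, not the code.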
4. The ecosystem impact deserves honesty, not framing.
Stacy (stXTZ) is live on L1 and Etherlink. Youves is the largest DeFi protocol on Tezos by TVL. Both are actively building and growing. Saying enshrined liquid staking is designed to “complement, not compete” doesn’t hold up when you’re offering the same core service with zero counterparty risk and protocol-level trust. That is a structural advantage no third-party project can match.
If the protocol is going to compete with its own ecosystem, say so plainly and explain why the security benefit outweighs the ecosystem cost.
5. Process.
Features with this level of economic impact need to be socialized before they’re built, not after. The two-phase rollout (feature-flagged in U, activated in V) is responsible engineering, and the testnet-first approach is the right way to validate. But the community should be evaluating the design alongside the development, not reviewing a finished implementation presented as an open question.
If Tezos is going to keep pointing to on-chain governance as its defining advantage over every other chain, then the governance process needs to actually produce the work product that justifies that claim. That means economic models published before code is merged. That means parameter discussions happening in public, not in MR comments. That means the community evaluating tradeoffs with real data, not reviewing finished implementations wrapped in the language of open questions. We’ve done 20+ upgrades with zero hard forks. That’s the flex. But the flex only holds if the process is real, and right now, the process is trailing the code.
I don’t think anyone is asking for Cardano-level work product, just something thoughtful enough that bakers and stakers can evaluate what they’re voting on before the code is already written. A one-pager with the proposed parameter ranges, a basic sensitivity analysis, and the allocation logic. And it’s not just about transparency for the community. The process of actually modeling these things forces you to stress-test your own assumptions. Engineers and architects regularly reconsider their approach when they sit down and run the numbers. That’s not overhead; that’s how you catch bad design decisions before they hit mainnet.
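To show how little work "run the numbers" actually demands, here is a ten-line sensitivity pass. Every number and the allocation rule are my assumptions, not NL's: one baker sweeps its fee against a competitor holding a fixed 5% fee, under an assumed `1/fee`-weighted allocation.

```python
# Toy sensitivity sketch (all assumptions mine): baker revenue vs. fee under a
# 1/fee-weighted allocation rule, against one competitor at a fixed 5% fee.
annual_yield = 0.05          # assumed staking yield
total_stake = 1_000_000.0    # assumed stake allocated by the protocol
competitor_fee = 0.05

for my_fee in (0.01, 0.02, 0.03, 0.05, 0.08):
    # weight = 1/fee: cutting your fee wins stake from the competitor
    w_me, w_other = 1.0 / my_fee, 1.0 / competitor_fee
    my_stake = total_stake * w_me / (w_me + w_other)
    revenue = my_fee * my_stake * annual_yield
    print(f"fee={my_fee:.0%}  stake={my_stake:>9.0f}  revenue={revenue:>7.0f} tez/yr")
```

Under this particular toy rule, revenue rises monotonically with fee, so there is no race to the bottom; a steeper weighting inverts that conclusion. The takeaway isn't the result, it's that the conclusion flips with the allocation rule, which is exactly why the rule and the parameter bounds need to be public before the vote.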