Activation of Hangzhou Protocol between December 3 and December 4, 2021

Dear Bakers,

The activation of Hangzhou will take place between Friday 3rd and Saturday 4th December (Hangzhou’s 1st block will have level 1,916,929) – the current estimate is Dec 4, 2021 01:09 CET.
This means that the network will officially adopt its 8th amendment, and the 4th in 2021 (here is a list of the new features of Hangzhou).

There are several important points to take into account to be ready for activation:

  1. In order to continue to bake and endorse blocks with Octez, you must absolutely update to v11.0, as Hangzhou cannot run on versions prior to v11, and launch the tezos-baker-011-PtHangz2, tezos-endorser-011-PtHangz2, and tezos-accuser-011-PtHangz2 daemons. It is fine to run both the Granada and Hangzhou2 daemons at the same time – in fact, this is the recommended way to proceed. The Granada daemons will become non-functional as soon as the new protocol is activated and can safely be stopped once Hangzhou is live. (A quick way to check which version your node is actually running is sketched right after this list.)

  2. One of the new features in Hangzhou is context storage flattening, which improves access to the Octez node’s context in order to increase overall chain speed and make the Octez node more efficient. Activating this feature requires a migration of the full context storage, which will be triggered right after the activation of Hangzhou. This process takes time, as it involves a rewrite of the whole context (20-30 minutes for full nodes and 2-3 hours for archive nodes). For full and rolling nodes, these times can be reduced by importing a recent snapshot from the last cycle (from December 2nd 2021): the fewer blocks stored in the context, the smaller it is, and the shorter the required migration time.
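
Regarding item 1, a quick way to double-check which version you are actually running is to ask the node itself. This is only a sketch, not part of the official upgrade instructions, and it assumes the node answers on the default RPC port 8732:

tezos-node --version
tezos-client --endpoint http://127.0.0.1:8732 rpc get /version

The first command prints the version of the installed binary; the RPC call reports the version of the node that is actually serving requests. Both should report 11.0 before the activation.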

Here is a detailed procedure to import a fresh snapshot before the activation of Hangzhou:

  • Download a recent snapshot, or export one from a running node: tezos-node snapshot export --block head~2

  • Without stopping the current node, set up a new one whose data will be stored in a different directory, here named tezos-new-node: tezos-node config init --data-dir ~/tezos-new-node

  • Import the recently generated snapshot into the new node: tezos-node snapshot import /path/to/the/full/snapshot --data-dir ~/tezos-new-node

  • Launch the new node (be careful: if port 9732 is already taken, launch the node on another port, like 9733): tezos-node run --net-addr 127.0.0.1:9733 --rpc-addr 127.0.0.1:8733 --data-dir ~/tezos-new-node

  • Then, check that you have no imminent baking/endorsement slots (one way to check this is sketched after the commands below), stop your old node, and immediately restart the baker, endorser, and accuser daemons on the new node, both for Granada and Hangzhou2:

tezos-baker-010-PtGRANAD --endpoint http://127.0.0.1:8733 run with local node ~/tezos-new-node/ baker_alias
tezos-endorser-010-PtGRANAD --endpoint http://127.0.0.1:8733 run baker_alias
tezos-accuser-010-PtGRANAD --endpoint http://127.0.0.1:8733 run
tezos-baker-011-PtHangz2 --endpoint http://127.0.0.1:8733 run with local node ~/tezos-new-node/ baker_alias
tezos-endorser-011-PtHangz2 --endpoint http://127.0.0.1:8733 run baker_alias
tezos-accuser-011-PtHangz2 --endpoint http://127.0.0.1:8733 run
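
For the check on imminent baking/endorsement slots, one option (a sketch, not from the original instructions) is to query the old node for your delegate’s upcoming rights, and to wait for the new node to be bootstrapped before switching over. Here <your_tz1_address> is a placeholder for your baker’s public key hash, and 8732 is assumed to be the old node’s RPC port:

tezos-client --endpoint http://127.0.0.1:8732 rpc get "/chains/main/blocks/head/helpers/baking_rights?delegate=<your_tz1_address>"
tezos-client --endpoint http://127.0.0.1:8732 rpc get "/chains/main/blocks/head/helpers/endorsing_rights?delegate=<your_tz1_address>"
tezos-client --endpoint http://127.0.0.1:8733 bootstrapped

The first two calls list the delegate’s rights around the current head (a level=<n> query parameter can be added to look further ahead), and the last one waits until the new node has caught up with the chain.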

You are now ready for Hangzhou!

Feel free to reach out if you have any questions and/or need special assistance.

v11.0 does not compile on Debian. It shows an error message when trying to compile the bls12-381-unix 1.0.1 package. I tried everything, nothing worked. I’m worried ;(

Well, I decided to download the binaries. They are running (v11)… Let’s see how it goes… Thanks anyway!

Can you open an issue with more details about your setup and whether you are compiling from scratch/using binaries or docker containers?

My 8 GB RAM baking machine is not good at generating snapshots. I tried yesterday to generate a rolling-mode snapshot based on the head block, and killed it after it had run for over 2 hours and grown to about 2.5 GB of RSS, having written about 30000K context elements (per its output). So in my case I’m better off downloading a snapshot.

Downloading a rolling mode snapshot from xtz-shots.io took about 10 minutes. Importing that snapshot took 50 minutes. Syncing the node with that (about 15 hours behind) took another 45 minutes.

Dear GermanD, thanks for your reply. My node runs on Debian 10. Since I started my node three years ago, I have always compiled from scratch and never had major issues. However, when compiling v11 from scratch, I ran into this error. My system has the bls12-381.1.0.1 package installed, but it is not able to build the bls12-381-unix.1.0.1 package. When I run make build-deps, it fails only on the compilation of that package. I noticed that it tries to build from the opam-switch/source folder into the opam-switch/build folder (it creates a new bls12-381-unix.1.0.1 folder inside opam-switch/build) and then fails.
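
Not an official fix, but one way to isolate the failing package and capture a full build log is to rebuild it on its own through opam, from the tezos source directory with the project’s local opam switch active (an assumption on my side about your setup):

eval $(opam env)
opam reinstall bls12-381-unix --verbose

If that still fails, the verbose output is exactly the kind of detail worth attaching to the issue GermanD asked for.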

Regarding the requirement to migrate the full context storage, I think it is not very clear what should be migrated, and to and from what. I already have my node in rolling history mode, so do I need to migrate anything? Please, guys, clarify this, as it’s a very important issue.

@Milfont, The Hangzhou2 protocol changes the structure of the Tezos context, flattening certain nested indexes. This optimizes storage use, but it requires translating the whole context storage between schemas – the migration in the quoted phrase. This will happen automatically during the activation of Hangzhou, after the last block of the Granada protocol, which still uses the current schema, and before the first block of Hangzhou2, which will use the new schema.

This will, again, be an automated process that requires no input from the user.

However, it will take some time: from 20-30 minutes for rolling and full nodes to 2-3 hours for archive nodes.

The procedure in this post (for rolling/full nodes) aims at reducing this slowdown by making sure the context storage is as lean as possible, by importing a fresh snapshot close to the end of cycle 427.
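
Once the migration has finished and the node starts validating blocks again, you can confirm it is following the new protocol by querying the head block. This is just a quick check, assuming the default RPC port 8732:

tezos-client --endpoint http://127.0.0.1:8732 rpc get /chains/main/blocks/head/protocols

After activation, the protocol hash reported for the head should start with PtHangz2.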

There is more information available:

Indeed, I have now read it again, and you did say you compiled. Please open an issue with this information.

I hope someone finds this data useful.

Snapshots

I ran a snapshot export/import benchmark on my baking machine. I don’t recall the exact hardware stats of the machine, but it is several years old and has a 1 TB SATA SSD and 8 GB RAM.

Exporting the snapshot took about 130 minutes:

$ time tezos-node snapshot export --block head~2 --rolling
Dec  1 14:16:25.336 - node.snapshots: exporting a snapshot in rolling mode, targeting block hash
Dec  1 14:16:25.336 - node.snapshots:   BMVEdLfifE9q4TfAKF6zPY6eYHJkDGYUhgp17E65jPhi62VP3Eg (level: 1909963)
Copying context: 50698K elements, 4154MiB written Done
Dec  1 16:26:37.412 - node.snapshots: successful export:
Dec  1 16:26:37.413 - node.snapshots:   TEZOS_MAINNET-BMVEdLfifE9q4TfAKF6zPY6eYHJkDGYUhgp17E65jPhi62VP3Eg-1909963.rolling

real    130m14.261s
user    28m18.218s
sys     25m4.182s

Importing the snapshot took another 46 minutes:

$ time tezos-node snapshot import 1909963.rolling --data-dir tezt/
Dec  1 16:31:19.012 - node.snapshots: importing data from snapshot
Dec  1 16:31:19.012 - node.snapshots:   1909963.rolling: chain TEZOS_MAINNET, block hash BMVEdLfifE9q4TfAKF6zPY6eYHJkDGYUhgp17E65jPhi62VP3Eg at level 1909963 in rolling (snapshot version 2)
Writing context: 50698K/50698K (100%) elements, 4155MiB read Done
Storing floating blocks: 120 blocks wrote Done
Dec  1 17:17:45.315 - node.snapshots: successful import from file 1909963.rolling

real    46m26.819s
user    33m14.182s
sys     13m41.108s

I did not run the node to let it catch back up with the network, but since it would have been about 3 hours behind at that point, I estimate it would take about 10-15 minutes to sync.

Migration

As for the context storage migration, I had seen a few benchmarks using really high-end systems with virtually unlimited resources, so I tried it out on intentionally low-end hardware to see how long it would take.

I ran a test migration on a freshly imported rolling snapshot of mainnet onto a Raspberry Pi 4B with 8 GB of RAM and a 256 GB microSD card (i.e., a really slow storage medium). The migration took 86 minutes, or about 1.5 hours, on that hardware. So if you have an SSD or NVMe drive, I think you could expect the migration for a similarly sized context to complete much faster than that.

real    86m16.189s
user    0m1.821s
sys     0m0.390s