Baking without executing any smart-contract

This is just a short explanation of the benefits that can be gained in the baker now that all transaction fees have to be paid from tz[123] accounts.

First, let me describe the lifecycle of a transaction in the current system.

How the system works now

1. An operation is created

An operation is created by the client and broadcast on the peer-to-peer network.

2. The operation is received by a node

The node receives and parses the operation. If it contains a so-called “managed operation” – a bundle of sub-operations such as transactions and originations – it checks that these operations are correctly signed and can pay their announced transaction fees. This check is made in a virtual context: the state of the chain as of the last block, after all the transactions already in the mempool have been executed. If a transaction can pay its fee, it is a valid transaction, even if it ends up exceeding its gas limit: in that case it is implicitly turned into a no-op, a dud, but a valid, fee-paying dud. This prevents DoS attacks, such as the one Ethereum suffered around the attempted DAO soft-fork.
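The admission rule above can be sketched as follows. All names here are invented for illustration; this is not the actual node code, just the shape of the check:

```python
# Hypothetical sketch of the admission rule: an operation is valid iff it
# is correctly signed and its source can pay the announced fee in the
# virtual context. Running out of gas later does NOT invalidate it; it
# merely becomes a fee-paying no-op.

class VirtualContext:
    def __init__(self, balances):
        self.balances = dict(balances)  # account -> balance in mutez

    def can_pay_fee(self, source, fee):
        return self.balances.get(source, 0) >= fee

def is_valid(ctx, op, signature_ok):
    # Signature checking is stubbed out; the real node verifies it.
    return signature_ok and ctx.can_pay_fee(op["source"], op["fee"])

ctx = VirtualContext({"tz1alice": 1_000})
rich_op = {"source": "tz1alice", "fee": 300, "gas_limit": 10_000}
poor_op = {"source": "tz1bob", "fee": 1, "gas_limit": 10_000}
```

Note that the gas limit plays no role in the validity check itself; it only matters during execution.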

3. The transaction is executed

The transaction is executed and the virtual context is updated. Further incoming transactions will be validated and executed as if this transaction had been applied. In particular, this means that the fees paying for a later transaction may come from a transaction that appeared earlier in the mempool.
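A minimal illustration of this fee chaining, with invented account names and amounts:

```python
# Fee chaining in the mempool's virtual context: applying transaction A
# credits tz1bob, and that credit is what lets transaction B pay its fee.
balances = {"tz1alice": 500, "tz1bob": 0}

def apply_transfer(balances, src, dst, amount, fee):
    assert balances[src] >= amount + fee, "cannot pay amount + fee"
    balances[src] -= amount + fee
    balances[dst] += amount

# Transaction A: alice sends 400 to bob, paying a 50 fee.
apply_transfer(balances, "tz1alice", "tz1bob", 400, fee=50)

# Transaction B is only valid because A was applied first: bob's
# balance (now 400) covers both his transfer and his fee.
apply_transfer(balances, "tz1bob", "tz1alice", 300, fee=10)
```

If transaction A were dropped from the block, transaction B would no longer be able to pay its fee; that is exactly the dependency problem discussed below.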

4. The baker starts preparing a block

Ideally, the baker would grab the entire mempool, but the mempool may exceed the block size limit or the block gas limit, so a subset of transactions has to be chosen. The baker wants to maximize the total fees it collects while respecting the block size and gas constraints. This is the multi-constraint knapsack problem, which is NP-complete, although efficient algorithms exist in practice.

But not so fast… the multi-constraint knapsack problem assumes that each item can be selected independently so long as the constraints are met. The mempool, however, may involve subtle dependency graphs. A transaction may be valid only because an earlier transaction paid for its fee. The dependencies can be even more labyrinthine than that: removing a transaction may change the state of a contract, which may cause another transaction to exceed its gas limit and fail to credit an account with enough tez to pay the fee of a third transaction.

The current solution is to heuristically pick attractive transactions (those with a large fee relative to their gas and size) and add them one by one to the block, re-executing them in a new “block” context. In general, there are very few such dependencies and things should just work, but since that is not guaranteed, transactions have to be re-executed. One might naively think that this block context and the mempool context could be one and the same, by having the mempool reject operations once its size and gas consumption exceed those of a regular block. However, this would let cheap transactions that arrive in the mempool first crowd out normal transactions and prevent a fee market from arising.
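A minimal sketch of such a greedy heuristic, ranking operations by fee density (field names and limits are illustrative, not protocol constants):

```python
def select_block(mempool, max_size, max_gas):
    # Rank operations by fee per unit of (gas + size), most attractive first.
    ranked = sorted(mempool,
                    key=lambda op: op["fee"] / (op["gas"] + op["size"]),
                    reverse=True)
    block, size, gas = [], 0, 0
    for op in ranked:
        # Greedily include any operation that still fits both limits.
        if size + op["size"] <= max_size and gas + op["gas"] <= max_gas:
            block.append(op)
            size += op["size"]
            gas += op["gas"]
    return block

mempool = [
    {"id": "a", "fee": 100, "gas": 10, "size": 10},  # density 5.0
    {"id": "b", "fee": 10,  "gas": 10, "size": 10},  # density 0.5
    {"id": "c", "fee": 60,  "gas": 20, "size": 20},  # density 1.5
]
chosen = select_block(mempool, max_size=30, max_gas=30)
```

This ignores the dependency problem entirely, which is precisely why the current system has to re-execute the selected transactions in a block context.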

5. The baker publishes the block

The block is published on the network. In general, the baker does not need to re-execute the block at this point; it is already aware of the context. However, should another block end up at the head (a so-called “steal”), the transactions have to be executed a third time.

How the system could work

Fundamentally, there should be no need to execute a transaction to bake. Transactions are valid or invalid depending on whether or not they can pay their fee. The issue stems from the fact that the ability to pay the fee can depend on the presence of other transactions.

To address that, we deliberately restrict the baker with the following constraint: the baker will not count any mempool increase in the balance of a tz[123] account towards its ability to pay fees. Instead, we create a simplified virtual context where fees are only ever deducted from the balance of a tz[123] account. This means the mempool might ignore high-fee transactions from a tz[123] account because that account has already been depleted by a series of less attractive transactions, but:

  1. that’s on the tz[123] address owner, it’s not like a third party can cause their transactions to be ignored
  2. the right way to deal with this is to introduce a proper replace-by-fee mechanism, which is, in my opinion, a reasonable thing to have but not a particularly high priority.
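A sketch of this restricted fee accounting (hypothetical names): only balances as of the last block count, and credits arriving through other mempool transactions are deliberately ignored, which is exactly what makes every subset of the mempool stay valid:

```python
class FeeContext:
    """Tracks how much each tz[123] account can still spend, based only
    on its balance as of the last block; credits from other mempool
    transactions are deliberately not counted."""

    def __init__(self, on_chain_balances):
        self.available = dict(on_chain_balances)

    def try_reserve(self, source, spend):
        # spend = fee + transferred amount for one operation
        if self.available.get(source, 0) < spend:
            return False
        self.available[source] -= spend
        return True

ctx = FeeContext({"tz1alice": 100, "tz1bob": 0})
ok_alice = ctx.try_reserve("tz1alice", 60)   # covered by on-chain funds
ok_bob = ctx.try_reserve("tz1bob", 10)       # any mempool credit ignored
```

Because reservations only ever decrease `available`, dropping or reordering admitted operations can never invalidate the remaining ones.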

This system has the property that any subset of the mempool, in any order, is a valid block. This can be known without executing any of the transactions. Of course, the baker must still verify that a transaction parses, and that the signature is valid, and that the fee can be paid, but no smart-contract needs to be executed at all.

This may seem a bit restrictive at first, as there are potentially valid sets of transactions that bakers will never consider. However, there is no particular use case or rationale for why those sets should matter. Transaction fees are not so expensive that waiting for a single extra block would represent a major capital lockup.

The workflow now looks like this:

  1. Client sends transaction
  2. Mempool receives transaction
  3. Node ensures it parses, is signed correctly, and that the tz[123] address paying for it can cover the fees
  4. Baker grabs the most attractive transactions in the mempool and shoves them in a block (following some heuristic or pseudo-polynomial solution to the multi-constraint knapsack problem)
  5. Baker publishes the block
  6. Baker sees that its own block is (at least temporarily) the head of the chain and executes its contents

I’ve not discussed another change that this might require. Currently, each block commits to the hash of the context as it is after the execution of the block. This is still possible in theory: the baker could compute that hash before publishing its block, and would not really need to re-execute the block when it becomes the head of the chain. However, it may be simpler to enforce that blocks commit to the hash of the context as it is before the execution of a block (or, equivalently, after the execution of the previous block).

Why did this require preventing smart-contracts from being “spendable” and able to pay transaction fees? Because a smart-contract’s balance can be reduced as a side effect of a transaction, whereas a tz[123] balance can only be reduced by a straightforward transaction. Without executing every transaction in the mempool, it would be impossible to be sure that a smart-contract can pay its fees.

In conclusion

Ensuring that only dead-simple tz[123] accounts can pay transaction fees opens the door to an important optimization in the mempool and the baker: a smart-contract never needs to be executed more than once.


Quick note:

One approach is to track all spends (fees or transfers) from a given implicit account and deduct them from its balance, ensuring that these transactions can always be included. Unfortunately, such transactions may still depend on each other through counter increases.

One solution would be to require that the counter in signed transactions merely be greater than the contract’s counter, thus eliminating the dependency (though this requires keeping those transactions in a given order in the block).
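Under that relaxed rule, block validation only needs to check that each account’s counters are increasing, as in this sketch (invented names; not the protocol’s actual validation code):

```python
def counters_ok(block_ops, chain_counters):
    # Each account's operations must appear in increasing-counter order,
    # each counter strictly greater than the one recorded on chain.
    # Gaps between counters are allowed under the relaxed rule.
    last = dict(chain_counters)
    for op in block_ops:
        if op["counter"] <= last.get(op["source"], 0):
            return False
        last[op["source"]] = op["counter"]
    return True

ops = [{"source": "tz1alice", "counter": 5},
       {"source": "tz1alice", "counter": 9}]  # 5 then 9: gap is fine
in_order = counters_ok(ops, {"tz1alice": 3})
out_of_order = counters_ok(list(reversed(ops)), {"tz1alice": 3})
```

Since any increasing subsequence of an account’s counters is still increasing, dropping some of its operations from a block never invalidates the rest.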

Another solution is to allow only one operation per implicit account per block. This might seem restrictive, but considering that a user can easily use multiple implicit accounts and can produce batch transactions, it isn’t actually a big limitation on the system.

Will projects like DAOBaker be affected by (still be possible after) this change?

It’s not particularly clear from the description what this project does but it doesn’t really seem related to this topic.
