For people who know about the “failing_noop”, there is a kind of obvious question here, which is ignored by the TZIP: why not use failing_noop, since it was ostensibly designed for this?
I could try to explain why I think the TZIP’s idea is better, and I might do so later. I would like to learn more about current tool behavior before I try to do that, though, since to me the most convincing argument for failing_noop is that it is “already supported” in some sense by tools.
Right now, I just want to give a brief description of the failing_noop for others who might consider this question.
Here, I think, is the intended failing_noop schema:
0x03<block_hash>0x11<len><message>
The 0x03 is the Tezos generic operation magic byte, which by some strange logic is being used (with 0x11) to indicate that the message is not a Tezos operation…
No one has ever specified how you are supposed to set or verify the <block_hash>. I think tezos-client sign message sets it to the current head block hash or to whatever the user specifies. I think there was something somewhere (I forget what) suggesting to use the genesis block hash. The original application of failing_noop (in the rejected “Florence + baker account” protocol amendment) used zero bytes… The implementer of failing_noop has suggested to me in private that this question be left up to the application – I am not very happy about that idea, but it could work.
The 0x11 is the id (17) for the failing_noop operation kind, which is defined in the protocol to always fail with an error.
The <len> is the 4-byte big-endian length of the following message bytes.
The <message> is arbitrary bytes. Or is it? The Florence + baker account usage of failing_noop used a 4-byte tag prefix, which was presumably the hash of some description of the message type. This convention was never documented anywhere publicly. The developer of failing_noop has recently proposed using a single “dapp-specific magic byte” as a prefix in the message.
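To make the layout concrete, here is a minimal sketch in Python of how the bytes to be signed could be assembled, assuming the schema above is right. The function name, the zero-byte default branch, and the example message are my own choices for illustration; in practice the branch would be whatever convention (head hash, genesis hash, zeros) the application settles on.

```python
import hashlib
import struct

def failing_noop_signing_bytes(message: bytes, branch: bytes = b"\x00" * 32) -> bytes:
    """Assemble the bytes to be signed, following the schema above:

       0x03          generic-operation magic byte
       <block_hash>  32-byte branch; zero bytes here, as in the
                     Florence + baker account usage
       0x11          failing_noop operation tag (17)
       <len>         4-byte big-endian length of the message
       <message>     arbitrary bytes
    """
    if len(branch) != 32:
        raise ValueError("branch must be a 32-byte block hash")
    return (
        b"\x03"
        + branch
        + b"\x11"
        + struct.pack(">I", len(message))
        + message
    )

# Signing then proceeds as for any Tezos operation: the payload is hashed
# with 32-byte BLAKE2b and the digest is what the key actually signs.
payload = failing_noop_signing_bytes(b"hello")
digest = hashlib.blake2b(payload, digest_size=32).digest()
```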
Two notes for now:
It’s not completely clear whether the above schema is really the only failing_noop schema. E.g. one can craft an operation containing two failing_noops (that is why the message length is there; see the sketch below these notes) – should these always be rejected by validators?
Additionally, it’s unclear whether the schema might change over time if the implementation details in Tezos change. Neither answer seems attractive: we don’t want the schema to change over time, but if it does not change along with Tezos then it will seemingly no longer achieve its design goal of being a Tezos message which means “this is not a Tezos message.”
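Regarding the first note, here is a small sketch (again in Python, with illustrative names of my own) of how two failing_noop contents could sit in one operation; the 4-byte length is what lets a parser find where the first message ends and the next content begins.

```python
import struct

def failing_noop_content(message: bytes) -> bytes:
    # One failing_noop content: tag 0x11, 4-byte big-endian length, then the message.
    return b"\x11" + struct.pack(">I", len(message)) + message

# An operation is a 32-byte branch followed by its contents, simply concatenated,
# so nothing in the encoding itself prevents packing two failing_noops together.
branch = b"\x00" * 32  # zero branch, for illustration only
two_noops = b"\x03" + branch + failing_noop_content(b"first") + failing_noop_content(b"second")
```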