It’s not clear what you’re proposing. Is TRANSFER_WITH_STAMP the same as transfer tokens? So you’ve transferred stamped bytes, now what? Can they be retransmitted? If not, what’s the point; if yes, what’s to prevent them from being transmitted twice?
I like preventing the duping of values contained in stamped_values best, but if we don’t want that complexity (something something linear types), here are two other options:
1. Add a RIP instruction which “rips” a stamped value. The stamped value can still be passed around and everything, but it can only be ripped once. RIP could either FAIL if called on an already-ripped value, or perhaps return a boolean: true the first time, and false thereafter. During the execution of a transaction, the interpreter keeps a global map of all stamped_value IDs, across different invocations, to track which ones have been ripped and which ones haven’t. To be clear, this isn’t a Michelson map; it’s a global value maintained by the interpreter for that transaction.
2. Put the onus of enforcing non-replayability on the contract receiving the stamped value by having it maintain a counter.
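Option 1’s semantics can be sketched like this, with the interpreter keeping a per-transaction table of ripped stamp IDs (a sketch only; all names are illustrative, not actual interpreter code):

```python
# Per-transaction set of stamp IDs that have already been ripped.
# This is interpreter state shared across invocations, not a Michelson map.
ripped: set[int] = set()

def rip(stamp_id: int) -> bool:
    """RIP returns True the first time a stamp is ripped, False thereafter.
    (The FAIL variant would raise instead of returning False.)"""
    if stamp_id in ripped:
        return False
    ripped.add(stamp_id)
    return True
```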
I’m not crazy about 1 because it creates a shared global state between different contracts as they get executed. I can’t think of anything bad that would happen, but it’s not good practice to do such things.
I’m not crazy about 2 because it would hand developers a powerful footgun.
Suppose I have a stamped_value (pair (address %from) (address %to)). Can I CAST it to a stamped_value (pair address address), and then to a stamped_value (pair (address %to) (address %from)) (or whatever)?
Or, maybe, the definition of “compatibility” of types (as used in CAST and elsewhere) will have a special case for stamped_value, enforcing a stricter notion of compatibility for the stamped type?
I’m less familiar with how annotations are used inside Michelson, and I know that they have evolved in importance. If I had to pick one, I would not let this be compatible, but I don’t think it’s a big issue if it is compatible.
I suspect you’re worried about confused deputy attacks but I think that, in general, the address of the intended recipient would almost always be a part of the stamped value.
François, the SmartPy creator, pointed out that “non-dupable” would not allow these to be extracted from pairs. This probably isn’t an issue if we have a native
He proposes, alternatively, that the contract simply fails if multiple copies of a stamped_value are present in the list of passed operations.
He also suggests that not building in replay protection in stamped_value and promoting good patterns for their consumption would work as well (and let stamped_value be storable).
Just spent a few hours prototyping; it doesn’t build yet, but it is available here.
The current choices I have to make:
1. There is no concept of “type of values that can be passed around in parameters, but can be created only by smart-contracts” in Michelson. Without one, users could build fake stamps (because a user can create all the values that a smart-contract can). So, we would need to add one.
This can be avoided if stamps are not passed around, but only references to stamps, stored in some other place.
So I need to make a choice there. Both make Michelson more complex.
2. There is no concept of “local effects” in Michelson. Effects in Michelson are kind of opaque, because they can only affect context, a catch-all value for all effectful functions in Tezos. step in Michelson has this type:
step : type before after. context -> (before, after) code -> before stack -> (after stack * context)
context has, for instance, a variable holding the remaining gas.
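To make the first option concrete, here is a hypothetical shape for threading a local effects value alongside context, mirroring the step signature above (all names are illustrative, not the actual script_interpreter.ml types):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Context:
    remaining_gas: int  # the catch-all context holds e.g. the remaining gas

@dataclass
class Effects:
    ripped_stamps: list[int] = field(default_factory=list)  # local bookkeeping
    gas_spent: int = 0

# A toy step over a list-of-ints "stack": each instruction consumes gas and
# records local effects, instead of growing the global context further.
def step(ctx: Context, eff: Effects,
         instr: Callable[[int], int], stack: list[int]) -> list[int]:
    x, *rest = stack
    ctx.remaining_gas -= 1
    eff.gas_spent += 1
    return [instr(x), *rest]
```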
I would like to avoid adding more to this. So, I have another choice to make:
- Working on adding some auxiliary type capturing local effects
- Using some other solution like the free monadic interpreter, being worked on at NL
Suggestions are welcome!
Can we think about this in terms of migration paths?
It feels like the simplest solution to implement is one where handles are passed around and point to a structure that’s global to the execution of the transaction across multiple operations, with opcodes to read and write from that table. I would suggest no deallocation, just a seal on each value that can be intact or broken (or a counter).
It’s not as elegant as passing values around, but…
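A sketch of that global table, assuming integer handles and a seal that breaks on first read (hypothetical names; a counter would be a one-line change):

```python
# One table per transaction execution, shared across the operations it spawns.
class StampTable:
    def __init__(self):
        self._next = 0
        self._entries = {}  # handle -> (value, seal_intact)

    def write(self, value) -> int:
        """Store a value and return its handle. No deallocation."""
        handle = self._next
        self._next += 1
        self._entries[handle] = (value, True)
        return handle

    def read(self, handle):
        """Return the value and break the seal; None once the seal is broken."""
        value, intact = self._entries[handle]
        if not intact:
            return None
        self._entries[handle] = (value, False)
        return value
```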
Can we think of a way to implement the pointer approach, that would be transparently upgradable to the fancier version?
Or do both versions require an equal amount of sweat to integrate into Michelson?
That is the preliminary conclusion I am reaching in the previous message.
Do you mean this user-wise or implementation-wise?
If both variants are equally painful to implement, then it doesn’t matter and one might as well start with the cleanest version.
Still working on that cleanest version. It is essentially about making a more local state monad in script_interpreter.ml. A more local error monad will also naturally follow.
Just so you know, I think Raphaël C is taking a crack at it as well… the variant where the stamped values can be duped but the transaction fails if a sealed value is passed more than once, or to more than one contract, in the list of transactions returned. This was the solution suggested by François, and consensus seems to be forming around it. There is already a runtime check to ensure that operations are not duped; it’s a somewhat reasonable extension to ensure that duped stamped values aren’t passed either. The static benefit that we would get from making stamped values into linear types can be easily replicated by running a static analyzer on the code.
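That extension of the runtime check could look roughly like this, assuming each stamped value in the returned operations carries an ID (illustrative only, not the actual node code):

```python
def check_no_duped_stamps(stamp_ids: list[int]) -> None:
    """Fail the transaction if any stamped-value ID is passed more than once
    in the list of operations a contract returns."""
    seen = set()
    for stamp_id in stamp_ids:
        if stamp_id in seen:
            raise ValueError("duplicated stamped value in returned operations")
        seen.add(stamp_id)
```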
Dan Robinson pointed out a hackish way you could get this without a protocol change.
A contract could store a large table of “contract signatures”. Any contract can call it and write a message on it, and pass around a handle to that message. It’s clunky and requires a bunch of callbacks but it works.
To be clear, I don’t recommend this approach, just good to know it’s a possibility.
Had a long talk with the Agoric team; this made sense to them, as contract signatures are very ocap-ish.
They pointed out that these “contract signatures” can do a lot more than what I described above. For instance, the pattern I’ve described for a token contract went as follows:
- The contract registered as the owner of some amount of XYZcoin in the XYZ token contract creates a “signed” message asking for the transfer of n XYZ coins to a dex.
- This message is passed to the dex which then uses it to retrieve n XYZ coins.
Here’s a far better pattern:
- A contract (or some account) asks the XYZcoin contract to create an account for it; the XYZcoin contract creates an account entry, signs it, and sends it to the contract. This storable value is now a key that represents the right to access the XYZcoin contract. It can be kept as is, passed to another contract, etc.
- The contract calls XYZcoin and passes its account key as a parameter… there is no need to rely on the SENDER opcode.
- The contract requests that XYZcoin give it a signed receipt for “n XYZ coins”. The XYZ contract debits n coins from the balance of the requester and returns a “signed” receipt for n coins. This is now a portable representation of the coins that can be passed around between contracts.
- The contract sends the receipt to the dex, which can store it, cash it out, send it to another account, etc.
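The steps above can be sketched as data flow, with “signing” standing in for stamping by the XYZcoin contract (all names hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccountKey:      # the right to access one's XYZcoin account
    holder: str

@dataclass(frozen=True)
class Receipt:         # a portable representation of n coins
    amount: int

class XYZCoin:
    def __init__(self, balances: dict[str, int]):
        self.balances = balances

    def request_receipt(self, key: AccountKey, n: int) -> Receipt:
        """Debit the requester and issue a signed receipt for n coins."""
        if self.balances.get(key.holder, 0) < n:
            raise ValueError("insufficient balance")
        self.balances[key.holder] -= n
        return Receipt(amount=n)

    def cash_out(self, key: AccountKey, receipt: Receipt) -> None:
        """Anyone holding the receipt can credit it to their own account."""
        self.balances[key.holder] = (
            self.balances.get(key.holder, 0) + receipt.amount
        )
```

A real version would of course pair this with one of the replay-protection mechanisms discussed earlier, so a receipt cannot be cashed out twice.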
This basically replicates the experience of having assets as first class. Coins from token contracts can be sent just like tez, without expensive callbacks.
In fact, some (maybe not all) token contracts can completely dispense with the notion of mapping accounts to balances, and simply issue signed notes, stored by other contracts on the network, as well as a service to convert a list of notes into a list of other notes with the same sum of balances. As is, this would clash with implicit accounts, but there’s likely some way around it.
In the limit, it starts looking like the asset model that was (once) part of the code-base, where an asset is just a pair between a nat and a contract address, and can be automatically cut up, etc.
Why would it “clash with implicit accounts”?
Because this part?
Why not limit it to KT1 contracts then?
The MR implementing tickets.
You can see the more technical details there:
My professor Cesar Sanchez and I have been looking into tickets, and we think using them may jeopardize scalability:
Since the ticket type in Michelson represents a handle to a ticket stored in an internal table shared by all contracts, ticket operations modify the blockchain’s global state.
As we understand it, the operations PUNCH and OWN must have an immediately visible effect on other contracts; therefore, if two contracts try to PUNCH the same ticket there will be a race condition, preventing independence of execution among remote smart contracts, which does not seem compatible with sharding.
The current implementation is meant to approximate a linear type. Using a linear type would not cause trouble with sharding if I’m not mistaken. I think if or when that happens, better tickets can be introduced.
I agree that tickets can be implemented correctly with sharding using linear types to model the necessary concurrency control (they could also be modelled correctly with other concurrency control mechanisms). I think the point mcapretto wants to raise is that since operations now have global effects (beyond storage changes and further operations) due to ticket operations, one is forced to serialize contract calls.
My point is that it would be relatively straightforward to deprecate the old ticket system and introduce one based on linear types if and when sharding happens. Old style tickets would still be usable but only when no cross shard calls are involved.
I see. Linear types are a way to statically determine which operations are independent of which others (and can therefore be run in parallel). That is what I mean by “Ideally, if one knew the effects of all sub-operations generated by a contract, many call sequences (whose semantics are sequential) could be run in parallel” in Concurrency, BFS vs DFS (and a proposal).
What I propose in that thread is to be optimistic and run in parallel, check after the execution whether there are concurrency conflicts, and re-run if necessary. This technique is simple and works both for BFS and DFS.
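A minimal sketch of that optimistic scheme, assuming each call reports the storage keys it read and wrote during optimistic execution; calls that conflict with an earlier accepted call are queued for a sequential re-run (all names illustrative):

```python
from dataclasses import dataclass

@dataclass
class Call:
    id: int
    reads: set[int]   # storage keys read during optimistic execution
    writes: set[int]  # storage keys written

def conflicts(a: Call, b: Call) -> bool:
    """Two calls conflict if one writes a key the other reads or writes."""
    return bool(a.writes & (b.reads | b.writes)) or \
           bool(b.writes & (a.reads | a.writes))

def schedule(calls: list[Call]) -> tuple[list[int], list[int]]:
    """Run everything optimistically in parallel; after execution, any call
    that conflicts with an earlier accepted call must be re-run."""
    accepted: list[Call] = []
    rerun: list[int] = []
    for c in calls:
        if any(conflicts(c, a) for a in accepted):
            rerun.append(c.id)
        else:
            accepted.append(c)
    return [a.id for a in accepted], rerun
```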