Add `SS` and `LS` instructions.
There is an existing problem with using union types (e.g. `Identity`) in storage.
For the `Identity` example, we need to store 2 slots: one for the tag and one for the `b256` value. This is quite wasteful/expensive, and the cost grows even more if we consider storing/reading a struct with multiple fields.
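For illustration only, here is a minimal Rust sketch of how an `Identity`-like value ends up occupying two 32-byte slots, one for the variant tag and one for the `b256` payload. The enum shape and slot encoding below are assumptions, not the compiler's actual layout.

```rust
/// Illustrative only: the enum shape and slot encoding are assumptions,
/// not the compiler's actual layout.
#[derive(Clone, Copy)]
enum Identity {
    Address([u8; 32]),
    ContractId([u8; 32]),
}

/// One storage slot holds 32 bytes (a b256).
type Slot = [u8; 32];

fn to_slots(id: Identity) -> [Slot; 2] {
    let (tag, value) = match id {
        Identity::Address(v) => (0u8, v),
        Identity::ContractId(v) => (1u8, v),
    };
    let mut tag_slot = [0u8; 32];
    tag_slot[31] = tag; // a 1-byte tag still consumes a whole 32-byte slot
    [tag_slot, value]   // slot 0: tag, slot 1: b256 payload
}

fn main() {
    let slots = to_slots(Identity::Address([0xAA; 32]));
    assert_eq!(slots.len(), 2); // two storage slots for one logical value
}
```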
We have an open issue where we proposed one way to solve this for the specific case of the `Identity` type. It is very limited in scope, as it doesn't unlock any use-cases beyond that one type.
It has been suggested by @adlerjohn that another solution to the problem of expensive storage for enums & structs would be to add instructions to allow reading & writing n consecutive storage slots in a single operation.
So, for reading 2 storage slots, this would be slightly more expensive than reading one slot, but not twice as expensive (I would expect the savings to increase linearly with respect to n, at least to a point).
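As a back-of-the-envelope illustration of that expectation, a batched read pays the fixed per-access overhead once instead of n times. The constants below are invented for the sketch, not actual gas prices.

```rust
// Illustrative cost model only; BASE and PER_SLOT are invented constants,
// not actual FuelVM gas prices.
const BASE: u64 = 100;    // fixed overhead of one storage access
const PER_SLOT: u64 = 20; // marginal cost per slot touched

fn cost_individual(n: u64) -> u64 {
    n * (BASE + PER_SLOT) // n separate reads each pay the fixed overhead
}

fn cost_batched(n: u64) -> u64 {
    BASE + n * PER_SLOT // one batched read pays the overhead once
}

fn main() {
    // Reading 2 slots: more expensive than 1 slot, but well under 2x.
    assert!(cost_batched(2) > cost_batched(1));
    assert!(cost_batched(2) < 2 * cost_batched(1));
    // The savings grow linearly with n.
    assert_eq!(cost_individual(4) - cost_batched(4), 3 * BASE);
}
```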
We could add "Store Slots" & "Load Slots" (or SNS/LNS, "Store N Words" & "Load N Words") instructions to allow this.
Something along these lines:
SS: Store slots

| Description | The value of the $rC words starting at $rB is stored at the address $rA. |
|-------------|---------------------------------------------------------------------------|
| Operation   | `MEM[$rA, ($rC * 8)] = [$rB, $rB + ($rC * 8)];` |
| Syntax      | `ss $rA, $rB, $rC` |
| Encoding    | `0x00 rA rB rC` |
LS: Load slots

| Description | $rC words are loaded into $rA starting from $rB. |
|-------------|----------------------------------------------------|
| Operation   | `$rA = MEM[$rB, ($rC * 8)];` |
| Syntax      | `ls $rA, $rB, $rC` |
| Encoding    | `0x00 rA rB rC` |
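Taking the SS Operation row literally, it describes copying `$rC` 8-byte words from the range at `$rB` to the range at `$rA`. A minimal Rust rendering of that pseudocode is below; the bounds and ownership checks a real VM would perform are omitted, and the word-vs-slot question raised later still applies.

```rust
/// Sketch of the proposed semantics as written in the tables above:
/// copy `rc` words (8 bytes each) of VM memory from `src` to `dst`.
/// Bounds/ownership checks that a real VM would perform are omitted.
fn copy_words(mem: &mut [u8], dst: usize, src: usize, rc: usize) {
    let len = rc * 8; // $rC words of 8 bytes each
    let tmp = mem[src..src + len].to_vec();
    mem[dst..dst + len].copy_from_slice(&tmp);
}

fn main() {
    let mut mem = vec![0u8; 1024];
    mem[64..128].copy_from_slice(&[0xABu8; 64]); // pretend 2 slots live at offset 64
    copy_words(&mut mem, 0, 64, 8);              // 8 words = 64 bytes = 2 slots
    assert_eq!(&mem[0..64], &[0xABu8; 64][..]);
}
```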
Eliminates the need for #324. cc @adlerjohn
It is definitely more efficient at the DB layer if we can do a batch multi-key get vs. individual consecutive fetches. And even better than a multi-key get is iteration over slot keys that are neighbors when sorted lexicographically.
Each key lookup in an LSM is a logarithmic operation, while iterating over subsequent keys is O(1). This is because the database maintains a linked list over the sorted keys.
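To illustrate the access-pattern difference, here is a small Rust sketch using `BTreeMap` as a stand-in for the LSM-backed store; the real engine differs in the details, but the contrast between repeated point lookups and one seek followed by neighbor iteration is the same.

```rust
use std::collections::BTreeMap;

fn main() {
    // Stand-in for the slot database: keys are sorted lexicographically.
    let mut db: BTreeMap<[u8; 32], [u8; 32]> = BTreeMap::new();
    for i in 0u8..4 {
        let mut key = [0u8; 32];
        key[31] = i; // four consecutive slot keys
        db.insert(key, [i; 32]);
    }

    let start = [0u8; 32];

    // Individual fetches: every `get` is its own logarithmic lookup.
    let one_by_one: Vec<_> = (0u8..4)
        .map(|i| {
            let mut key = start;
            key[31] = i;
            db.get(&key).copied()
        })
        .collect();

    // Batch read: one logarithmic seek, then cheap steps to the sorted neighbors.
    let batched: Vec<_> = db.range(start..).take(4).map(|(_, v)| Some(*v)).collect();

    assert_eq!(one_by_one, batched);
}
```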
Writing slots in consecutive batches would share a similar benefit, since we read before writing. The write operation itself, however, generally won't see the same level of efficiency gain: the entire state diff from a transaction is held in memory and committed as a single batch afterward, so consecutive writes are already batched automatically. That said, the new benchmark framework can tell us whether there are any major cost savings here.
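A minimal sketch of that write path is below; the `StateDiff` type and its methods are illustrative, not actual fuel-core types. Writes accumulate in an in-memory diff during execution and reach the database as one batch at commit time, so consecutive writes end up grouped regardless of how the contract issued them.

```rust
use std::collections::BTreeMap;

/// Illustrative only; not an actual fuel-core type.
struct StateDiff {
    pending: BTreeMap<[u8; 32], [u8; 32]>, // writes buffered during tx execution
}

impl StateDiff {
    fn write_slot(&mut self, key: [u8; 32], value: [u8; 32]) {
        self.pending.insert(key, value); // no DB access yet
    }

    /// Commit the whole diff as one batch; consecutive keys arrive together
    /// whether the contract wrote them one at a time or via a batched opcode.
    fn commit(self, db: &mut BTreeMap<[u8; 32], [u8; 32]>) {
        db.extend(self.pending);
    }
}

fn main() {
    let mut db = BTreeMap::new();
    let mut diff = StateDiff { pending: BTreeMap::new() };
    for i in 0u8..3 {
        let mut key = [0u8; 32];
        key[31] = i;
        diff.write_slot(key, [i; 32]);
    }
    diff.commit(&mut db);
    assert_eq!(db.len(), 3);
}
```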
It's probably ok if we don't get quite the same benefit for write operations. I suspect that the majority of cases will involve write once, read many times. If we can get some savings for writing consecutive slots, that would be great, though!
A couple notes:
- Use "read" and "write" instead of "load" and "store" to be more consistent with other instructions.
- Should this read/write a number of words or a number of storage slots? (The sketch below contrasts the two.)
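To make the word-vs-slot distinction concrete for the `Identity` example, the byte counts work out as follows; which unit the operand should count is exactly the open question.

```rust
// Byte counts for the two-slot Identity example above; which unit the
// opcode operand should count (words or slots) is exactly the open question.
const WORD_BYTES: usize = 8;  // one FuelVM word
const SLOT_BYTES: usize = 32; // one b256 storage slot

fn main() {
    let bytes = 2 * SLOT_BYTES;              // tag slot + value slot = 64 bytes
    let count_in_slots = bytes / SLOT_BYTES; // $rC = 2 if counted in slots
    let count_in_words = bytes / WORD_BYTES; // $rC = 8 if counted in words
    assert_eq!((count_in_slots, count_in_words), (2, 8));
}
```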
@adlerjohn should we close this issue? Looks like we've opted to put this functionality into our existing quad-word opcodes rather than introducing new ones.
related: https://github.com/FuelLabs/fuel-specs/pull/422