Allow running 'scheduled' operations
Problem
When we want to send several operations/transactions that need to be executed in a specific order, we have to wait until each previous operation ID has reached FINAL status.
Use-case Example
This leads to deployments that take a very long time. For instance:
- As a developer, I want to deploy a website of approximately 1 MB (a Tic-Tac-Toe game, for example)
- The zipped website build is larger than the block size limit => we need to split the zip into several chunks
- The first insertion uses `setStorage`; the following ones use `appendData`
- Since we need to wait for FINAL status to ensure data integrity between operations, inserting all the chunks can take a long time (40 minutes in this case)
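The upload flow described above can be sketched as follows. This is a minimal illustration, not the actual deployer: `CHUNK_SIZE` is an invented constant (not the real Massa block size limit), and `sendSetStorage`, `sendAppendData`, and `waitFinal` are hypothetical helpers standing in for the real client calls.

```typescript
// Assumed per-chunk size for illustration only; NOT the real block size limit.
const CHUNK_SIZE = 100_000; // bytes

// Split the zipped website build into chunks that each fit in one operation.
function splitIntoChunks(data: Uint8Array, chunkSize: number = CHUNK_SIZE): Uint8Array[] {
  const chunks: Uint8Array[] = [];
  for (let offset = 0; offset < data.length; offset += chunkSize) {
    chunks.push(data.slice(offset, offset + chunkSize));
  }
  return chunks;
}

// Sequential upload: the first chunk overwrites the storage key, the rest
// append to it. Each step waits for FINAL status before sending the next
// operation, which is exactly what makes the whole deployment so slow.
async function uploadWebsite(
  zip: Uint8Array,
  sendSetStorage: (chunk: Uint8Array) => Promise<string>, // hypothetical helper
  sendAppendData: (chunk: Uint8Array) => Promise<string>, // hypothetical helper
  waitFinal: (opId: string) => Promise<void>,             // hypothetical helper
): Promise<number> {
  const chunks = splitIntoChunks(zip);
  for (let i = 0; i < chunks.length; i++) {
    const opId = i === 0 ? await sendSetStorage(chunks[i]) : await sendAppendData(chunks[i]);
    await waitFinal(opId); // bottleneck: one FINAL confirmation per chunk
  }
  return chunks.length;
}
```

With a 1 MB zip and this chunk size, that is 10 sequential FINAL waits; the total time is dominated by finality latency, not bandwidth.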
That would boil down to adding a new feature:
- add a "dependencies" field to operations, listing the operation IDs that must have been executed before the operation itself becomes executable
- we already store a cache of executed operation IDs, but only during their validity time; it is used to prevent operation reuse. We could use this cache to verify that the requirements of an operation are satisfied, and if they are not, simply ignore the operation with unsatisfied requirements
- edge case: operations whose validity time starts after the end of the validity time of any of their dependencies will never be executed, because their dependencies will have been dropped from the cache. Even if we extend cache storage beyond operation validity expiration, we cannot extend it indefinitely, so this limit will always exist.
- performance impact: more checks at runtime (each check runs more than 10,000 times per second)
- performance impact: operation size will increase with the new field (even if it is optional) => more network and deserialization weight, fewer operations per second
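The dependency check described above could look roughly like this. It is a sketch with invented names: the real operation format and the executed-operations cache are internal to the node, and `isExecutable` is not an actual Massa function.

```typescript
// Hypothetical operation shape carrying the proposed optional field.
interface Operation {
  id: string;
  dependencies?: string[]; // operation IDs that must already have been executed
}

// The node already keeps a cache of executed operation IDs during their
// validity time (to prevent reuse); the proposal reuses it for this check.
function isExecutable(op: Operation, executedCache: Set<string>): boolean {
  if (!op.dependencies || op.dependencies.length === 0) {
    return true; // no requirements: behaves exactly like today
  }
  // If any dependency is absent from the cache (never executed, or already
  // expired and dropped), the operation is silently ignored.
  return op.dependencies.every((dep) => executedCache.has(dep));
}
```

Note how the edge case falls out of this logic: an expired dependency is indistinguishable from one that never executed, so the dependent operation is dropped in both cases.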
We need to qualify this requirement given the possible negative impacts, and fully validate that the edge cases are acceptable and intuitive to developers.
Another thing we need to do is to work around having to upload megabyte-sized websites, because the blockchain is simply not suitable for that. We should host our frameworks on the blockchain once and avoid re-including them inside every website. We should also avoid storing heavy assets like images and videos on-chain. A pruned, minified website using an already-stored framework (massa-web3, and possibly CSS as well, like bootstrap.css), without images or videos, and zipped, has no good reason to be heavier than a couple of kilobytes.
To me, the best option here is to make websites smaller (as they are meant to be). That being said, having this feature (as well as an "override" field preventing a list of ops of the same address from being executed AFTER the op containing the list) will be useful for other purposes, such as natively ordering keyboard commands sent in a video game, or more cleanly cancelling pending operations.
@massalabs/core-team any comments?
So what I understand is that it's a bit of work.
Totally agree with the fact that:
- Images, videos, etc. should not be included in the zip but should be linked via a web URL (or permanent storage for decentralized solutions)
- Same for front-end libraries; it's preferable that builders use a CDN
After reflection, I think we can do something different for use cases that remain big:
- Don't store the whole website under a single storage key; use different keys (for instance massa_web0, massa_web1, ..., massa_webN, with N + 1 = number of chunks)
- Create SC logic OR a backend to merge the chunks for rendering

This would avoid having to wait for each chunk to finish writing to storage, and would improve the user experience.
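Merging the per-key chunks for rendering could be sketched like this. The key names follow the massa_web0 ... massa_webN convention above; `readKey` is a hypothetical accessor standing in for whatever storage-read call the backend or smart contract actually uses.

```typescript
// Reassemble a website stored across keys massa_web0 ... massa_webN.
// readKey is a hypothetical datastore accessor (backend RPC or SC storage
// read) that returns undefined for a missing key.
function mergeChunks(
  readKey: (key: string) => Uint8Array | undefined,
  prefix: string = "massa_web",
): Uint8Array {
  const chunks: Uint8Array[] = [];
  for (let i = 0; ; i++) {
    const chunk = readKey(`${prefix}${i}`);
    if (chunk === undefined) break; // past the last key, massa_webN
    chunks.push(chunk);
  }
  // Concatenate all chunks into a single byte array.
  const total = chunks.reduce((sum, c) => sum + c.length, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const c of chunks) {
    out.set(c, offset);
    offset += c.length;
  }
  return out;
}
```

Because each chunk lives under its own key, the writes are independent: they can all be sent in parallel and confirmed in any order, instead of serializing one FINAL wait per `appendData`.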