titanoboa
feature: deployment logs
log things about deployments (e.g., what is currently being printed in `boa/network.py`, but maybe also the contract / `compiler_data`) so that interrupted deployments can continue. probably keep it in a leveldb database.
boa needs a way to cache deployments such that:
- the source code hash is the key
- the value is a (tx_hash, contract address) tuple

if the RPC borks but we have the tx hash, the next run only fetches the contract address, iff the source code hash is the same.
if the source code hash changes, just do a new deployment.
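A minimal sketch of that scheme, using sqlite for illustration (the issue suggests leveldb); `send_deploy_tx` and `fetch_receipt` are hypothetical placeholders for boa's network plumbing, not real APIs:

```python
import hashlib
import sqlite3

db = sqlite3.connect("deployments.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS deployments "
    "(source_hash TEXT PRIMARY KEY, tx_hash TEXT, address TEXT)"
)

def deploy_cached(source_code, send_deploy_tx, fetch_receipt):
    key = hashlib.sha256(source_code.encode()).hexdigest()
    row = db.execute(
        "SELECT tx_hash, address FROM deployments WHERE source_hash = ?", (key,)
    ).fetchone()
    if row is not None:
        tx_hash, address = row
        if address is None:
            # interrupted run: we already have the tx hash, only fetch the address
            address = fetch_receipt(tx_hash)["contractAddress"]
            db.execute(
                "UPDATE deployments SET address = ? WHERE source_hash = ?",
                (address, key),
            )
            db.commit()
        return address
    # source hash changed (or never seen): do a fresh deployment
    tx_hash = send_deploy_tx(source_code)
    db.execute("INSERT INTO deployments VALUES (?, ?, NULL)", (key, tx_hash))
    db.commit()  # persist the tx hash before waiting on the receipt
    address = fetch_receipt(tx_hash)["contractAddress"]
    db.execute(
        "UPDATE deployments SET address = ? WHERE source_hash = ?", (address, key)
    )
    db.commit()
    return address
```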
@charles-cooper Should we reuse the existing DiskCache mechanism for boa deployments? Or do we really need leveldb? Do we need a separate setting for caching besides the existing `set_cache_dir`, @bout3fiddy?
The key should be at least `(chainid, vyper integrity hash)` and the value `(tx_hash, contract_address)`.
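For illustration, the suggested key/value shape could look like this; all names here are hypothetical, and `integrity_hash` stands in for vyper's integrity hash of the source:

```python
from typing import NamedTuple

class DeploymentKey(NamedTuple):
    chain_id: int
    integrity_hash: str  # vyper integrity hash of the source

class DeploymentValue(NamedTuple):
    tx_hash: str
    contract_address: str

cache: dict[DeploymentKey, DeploymentValue] = {}
key = DeploymentKey(chain_id=11155111, integrity_hash="0xabc...")
cache[key] = DeploymentValue(tx_hash="0xdead...", contract_address="0xBEEF...")
```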
The implementation currently in #240 works as follows:
- The user may give a `deploy_id` argument when calling `load(s)` or `deploy`
- This argument is user-supplied and may contain whatever int/string they choose
boa.load("contract.vy", *args, deploy_id=1)
boa.load("contract.vy", *args, deploy_id=1) # this will be cached
boa.load("contract.vy", *args, deploy_id=2) # this will not be cached, different deploy ID
boa.load("contract.vy", *args2, deploy_id=1) # this will not be cached, different constructor args
boa.load("contract2.vy", *args, deploy_id=1) # this will not be cached, different source code
boa.env.set_chain_id(11155111)
boa.load("contract.vy", *args, deploy_id=1) # this will not be cached, different chain ID
- I believe this is easy to use because it leaves it to the user to define their own deploy ID (i.e. the cache key)
- Given Python is a Turing-complete language, we should not attempt to analyze the user's code flow/execution order
- In general, users might want to cache all deployments to the same network
  - It should be fine then to use `deploy_id=1` for all deployments, unless they want to deploy the same contract with the same arguments twice to the same network
- To reset the cache and restart all (or some) deployments, the deploy ID may be updated accordingly
- If no caching is wanted, simply do not pass any deploy ID (or use a random one)
Alternatively, we could also set the deploy cache ID globally so the user doesn't need to override it every time, e.g.

```python
boa.env.set_deploy_cache_id(1)
boa.load("contract.vy", *args)
boa.load("contract.vy", *args)  # this will be cached

with boa.env.anchor_deploy_cache_id(2):
    boa.load("contract.vy", *args)  # this will not be cached, different deploy ID
```
I think we will also need some sort of way to save these deployments into a JSON file. Usually I need to pass these deployments across to other service providers so they can enter them into their config file. Maybe:

```python
boa.env.dump_cache(deploy_id=1, filename=filename)
```
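For illustration, `dump_cache` could look roughly like this; the signature and the table layout are assumptions based on the proposal, not an existing API:

```python
import json
import sqlite3

def dump_cache(deploy_id, filename, db_path="deployments.db"):
    db = sqlite3.connect(db_path)
    rows = db.execute(
        "SELECT contract, tx_hash, address FROM deployments WHERE deploy_id = ?",
        (deploy_id,),
    ).fetchall()
    payload = {c: {"tx_hash": t, "address": a} for c, t, a in rows}
    with open(filename, "w") as f:
        json.dump(payload, f, indent=2)  # human-readable, easy to hand to other services
```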
yea, will be useful to get a human-readable thing.
maybe something that will help is if the deployment log is not stored in a global database, but in a database in the current directory. that way there are no weird conflicts between repos (and the deploy log is easier to find too!)
@charles-cooper are you now on board with this solution? Or what do you suggest?
Moving it to the application folder + adding a dump/import should be fine. For that purpose, JSON format might be easier to handle than sqlite.
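Resolving the log per project could be as simple as anchoring it to the working directory; the path and filename here are illustrative, not an agreed-upon layout:

```python
from pathlib import Path

def deployments_path() -> Path:
    # keep the log next to the project so repos don't share state
    return Path.cwd() / ".boa" / "deployments.json"

path = deployments_path()
path.parent.mkdir(exist_ok=True)
```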
> @charles-cooper are you now on board with this solution?
no i don't think so -- a false cache miss is OK, but a false cache hit can be catastrophic