Move GCP Aggregator to 'preview' network
This PR upgrades the Aggregator hosted on GCP cloud so that it uses the preview Cardano network instead of the legacy testnet:
- [x] Upgrade the Cardano node to `1.35.3`
- [x] Parametrize the `docker-compose.yaml` file so that it can be used with different networks
- [x] Update the `poolIds` of the Signer nodes (the `poolIds` are selected among the SPOs on the `preview` network) and reduce their number to `2`
- [x] Use the `preview` network with `NETWORK=preview NETWORK_MAGIC=2` in the `aggregator.tf` file
- [x] Update the documentation
After the merge, manually duplicate the stores' data so that we don't need to wait 2 epochs before signing.
Relates to #457
Unit Test Results
7 files ±0 · 22 suites ±0 · 1m 28s :stopwatch: -43s
293 tests ±0 · 293 :heavy_check_mark: ±0 · 0 :zzz: ±0 · 0 :x: ±0
294 runs ±0 · 294 :heavy_check_mark: ±0 · 0 :zzz: ±0 · 0 :x: ±0
Results for commit adbfbd49. ± Comparison against base commit e0916d3d.
:recycle: This comment has been updated with latest results.
There is a `CardanoNetwork` enum in mithril-common used in `BeaconProvider` & `CliObserver`. IMHO the enum variant `CardanoNetwork::TestNet` should be changed to `Preview` as well.
> There is a `CardanoNetwork` enum in mithril-common used in `BeaconProvider` & `CliObserver`. IMHO the enum variant `CardanoNetwork::TestNet` should be changed to `Preview` as well.
Actually, the parameters used with the commands to the `cardano-cli` will depend on the `CardanoNetwork` type.
IMHO it's better to have `CardanoNetwork::TestNet` handle a test network, which can be any of:
- `testnet` with magic id `1097911063`
- `preprod` with magic id `1`
- `preview` with magic id `2`
- `private` with any magic id
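A minimal sketch of what this could look like (not the actual mithril-common code; the variant shape and `magic_id` helper are assumptions): a `TestNet` variant carrying its network magic id can represent all four test networks listed above with a single enum variant.

```rust
/// Hypothetical sketch of a network type where one `TestNet` variant
/// covers testnet, preprod, preview, and private networks by magic id.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum CardanoNetwork {
    MainNet,
    /// Any test network, identified by its network magic id.
    TestNet(u64),
}

impl CardanoNetwork {
    /// Magic id to pass to `cardano-cli` via `--testnet-magic`
    /// (mainnet uses the `--mainnet` flag instead, hence `None`).
    pub fn magic_id(&self) -> Option<u64> {
        match self {
            CardanoNetwork::MainNet => None,
            CardanoNetwork::TestNet(magic) => Some(*magic),
        }
    }
}

fn main() {
    // The networks from the list above, by magic id.
    let legacy_testnet = CardanoNetwork::TestNet(1097911063);
    let preprod = CardanoNetwork::TestNet(1);
    let preview = CardanoNetwork::TestNet(2);
    println!("{:?} {:?} {:?}", legacy_testnet, preprod, preview);
    assert_eq!(preview.magic_id(), Some(2));
}
```

This keeps `cardano-cli` invocation logic uniform: every test network takes the same `--testnet-magic <id>` path, and only mainnet is special-cased.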
I have added a commit that implements this behavior and also enhances the computation of the `CardanoNetwork` from a config in the Signer/Aggregator.
Let me know if that works for you :slightly_smiling_face:
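The config-to-network computation mentioned above could be sketched like this (a hypothetical function, assuming config values shaped like the `NETWORK` / `NETWORK_MAGIC` variables from the checklist; the function name and error type are not from the actual code):

```rust
/// Sketch of the network type, as discussed in this thread.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum CardanoNetwork {
    MainNet,
    TestNet(u64),
}

/// Hypothetical: map a `NETWORK` name (plus an optional `NETWORK_MAGIC`)
/// to a `CardanoNetwork`. Known test networks get their well-known magic
/// id; a private network must supply one explicitly.
pub fn network_from_config(name: &str, magic: Option<u64>) -> Result<CardanoNetwork, String> {
    match (name, magic) {
        ("mainnet", _) => Ok(CardanoNetwork::MainNet),
        ("testnet", _) => Ok(CardanoNetwork::TestNet(1097911063)),
        ("preprod", _) => Ok(CardanoNetwork::TestNet(1)),
        ("preview", _) => Ok(CardanoNetwork::TestNet(2)),
        ("private", Some(id)) => Ok(CardanoNetwork::TestNet(id)),
        _ => Err(format!("unknown or underspecified network: {}", name)),
    }
}

fn main() {
    // NETWORK=preview NETWORK_MAGIC=2, as set in the aggregator.tf file.
    assert_eq!(
        network_from_config("preview", Some(2)),
        Ok(CardanoNetwork::TestNet(2))
    );
}
```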
Seems like this is just fine; can we merge it? Given the time it can take to register SPOs, the sooner the better.
Thanks a lot @jpraynaud. I understand there's manual work to be done for the deployment; it can definitely wait until tomorrow morning :)