
Not able to start subnet network with avalanchego 1.11.0 and subnet-evm 0.6.0

Open bhargavkacharla opened this issue 1 year ago • 45 comments

Describe the bug
Our existing Fuji Subnet suddenly stopped working: transactions submitted to the subnet have been pending for a long time. Read transactions succeed, but write transactions stay pending. Logs from sudo journalctl -u avalanchego -f:

INFO peer/peer.go:956 beacon attempting to connect with newer version. You may want to update your client {"nodeID": "NodeID-NpagUxt6KQiwPch9Sd4osv8kD1TZnkjdk", "beaconVersion": "avalanchego/1.11.0"}

Based on these logs, I upgraded avalanchego to 1.11.0 and subnet-evm to 0.6.0 and restarted the network, but am now getting a new error in the logs:

Feb 15 17:52:17 ip-172-31-21-192 avalanchego[15897]: ERROR[02-15|17:52:17.192] <2BWGibjUxAkYcnWAJo432HBp53FE4j5CRe5bg2hV3j5mfxohnS Chain> github.com/ava-labs/subnet-evm/core/genesis.go:246: found mismatch between config on database vs. new config storedConfig="{"byzantiumBlock":0,"chainId":2023032,"constantinopleBlock":0,"contractDeployerAllowListConfig":{"adminAddresses":["0x2505a33a144d9e63c3c53ee65c3108239e04a0af","0x88fa61d2faa13aad8fbd5b030372b4a159bbbdfb"],"blockTimestamp":0},"contractNativeMinterConfig":{"adminAddresses":["0x2505a33a144d9e63c3c53ee65c3108239e04a0af"],"blockTimestamp":0},"eip150Block":0,"eip155Block":0,"eip158Block":0,"feeConfig":{"gasLimit":8000000,"targetBlockRate":2,"minBaseFee":25000000000,"targetGas":15000000,"baseFeeChangeDenominator":36,"minBlockGasCost":0,"maxBlockGasCost":1000000,"blockGasCostStep":200000},"feeManagerConfig":{"adminAddresses":["0x2505a33a144d9e63c3c53ee65c3108239e04a0af"],"blockTimestamp":0},"homesteadBlock":0,"istanbulBlock":0,"muirGlacierBlock":0,"petersburgBlock":0,"subnetEVMTimestamp":0,"txAllowListConfig":{"adminAddresses":["0x88fa61d2faa13aad8fbd5b030372b4a159bbbdfb","0x2505a33a144d9e63c3c53ee65c3108239e04a0af"],"blockTimestamp":0},"upgrades":{}}" 
newConfig="{"byzantiumBlock":0,"chainId":2023032,"constantinopleBlock":0,"contractDeployerAllowListConfig":{"adminAddresses":["0x2505a33a144d9e63c3c53ee65c3108239e04a0af","0x88fa61d2faa13aad8fbd5b030372b4a159bbbdfb"],"blockTimestamp":0},"contractNativeMinterConfig":{"adminAddresses":["0x2505a33a144d9e63c3c53ee65c3108239e04a0af"],"blockTimestamp":0},"durangoTimestamp":1707840000,"eip150Block":0,"eip155Block":0,"eip158Block":0,"feeConfig":{"gasLimit":8000000,"targetBlockRate":2,"minBaseFee":25000000000,"targetGas":15000000,"baseFeeChangeDenominator":36,"minBlockGasCost":0,"maxBlockGasCost":1000000,"blockGasCostStep":200000},"feeManagerConfig":{"adminAddresses":["0x2505a33a144d9e63c3c53ee65c3108239e04a0af"],"blockTimestamp":0},"homesteadBlock":0,"istanbulBlock":0,"muirGlacierBlock":0,"petersburgBlock":0,"subnetEVMTimestamp":0,"txAllowListConfig":{"adminAddresses":["0x88fa61d2faa13aad8fbd5b030372b4a159bbbdfb","0x2505a33a144d9e63c3c53ee65c3108239e04a0af"],"blockTimestamp":0},"upgrades":{}}" Feb 15 17:52:17 ip-172-31-21-192 avalanchego[15897]: [02-15|17:52:17.193] ERROR chains/manager.go:361 error creating chain {"subnetID": "SEerb4ZQYMPFiSaekS1LqnJqPgYR5uPYBR9ggPjfxc1SaeBSj", "chainID": "2BWGibjUxAkYcnWAJo432HBp53FE4j5CRe5bg2hV3j5mfxohnS", "chainAlias": "2BWGibjUxAkYcnWAJo432HBp53FE4j5CRe5bg2hV3j5mfxohnS", "vmID":"ftkYV1ngCuXQC6rm8GrTkKUD6d2Qsc5kzjWKkAuysyZkiVBsS", "error": "error while creating new snowman vm rpc error: code = Unknown desc = mismatching Durango fork block timestamp in database (have timestamp nil, want timestamp 1707840000, rewindto timestamp 1707839999)"}

To Reproduce

  1. upgrade existing avalanchego from 1.10.19 to 1.11.0
  2. upgrade subnet-evm to 0.6.0
  3. sudo systemctl stop avalanchego
  4. sudo systemctl start avalanchego
  5. sudo journalctl -u avalanchego -f

Expected behavior: the subnet runs and starts syncing blocks.

Operating System: Amazon EC2 Ubuntu instance

bhargavkacharla avatar Feb 15 '24 18:02 bhargavkacharla

Hi @bhargavkacharla, it seems you updated your node after the network upgrade took place, which is why you are seeing this error. You can attempt to recover your database by specifying "skip-upgrade-check": true as part of the chain configuration and restarting the node.

Note that in the future, it is important to upgrade the node's software to the latest version before the network upgrade takes place. The network upgrade may change the ruleset, and if your node accepts blocks that differ from the ones in the upgrade, your database will not be recoverable and you will need to sync from an empty database.

darioush avatar Feb 15 '24 18:02 darioush

@darioush Where can I change the chain configuration to specify "skip-upgrade-check": true?

bhargavkacharla avatar Feb 15 '24 18:02 bhargavkacharla

The default path for the chain configuration is ~/.avalanchego/configs/chains/<chainID>/config.json. The file should contain a valid json object, so if it is empty you should add {"skip-upgrade-check": true} to that file.

darioush avatar Feb 15 '24 18:02 darioush

On my machine, the path is ~/.avalanchego/configs/chains/C/config.json. I added the required field to the JSON object, and the config.json file now looks like this:

{
  "state-sync-enabled": true,
  "skip-upgrade-check": true
}

I restarted the node (sudo systemctl stop avalanchego, then sudo systemctl start avalanchego, then sudo journalctl -u avalanchego -f) and am still getting the same error.

bhargavkacharla avatar Feb 15 '24 18:02 bhargavkacharla

@darioush

bhargavkacharla avatar Feb 15 '24 19:02 bhargavkacharla

You would need to put the mentioned configuration in the chain config for your chain. Using information from your logs, the location would be ~/.avalanchego/configs/chains/2BWGibjUxAkYcnWAJo432HBp53FE4j5CRe5bg2hV3j5mfxohnS/config.json

darioush avatar Feb 15 '24 19:02 darioush

@darioush

There is no such path on my instance:

ubuntu@ip-172-31-21-192:~$ cd ~/.avalanchego/configs/chains/2BWGibjUxAkYcnWAJo432HBp53FE4j5CRe5bg2hV3j5mfxohnS/
bash: cd: /home/ubuntu/.avalanchego/configs/chains/2BWGibjUxAkYcnWAJo432HBp53FE4j5CRe5bg2hV3j5mfxohnS/: No such file or directory
(Screenshot: 2024-02-16 12:50 AM)

bhargavkacharla avatar Feb 15 '24 19:02 bhargavkacharla

You would need to create that directory and add {"skip-upgrade-check": true} to a config.json file in that directory.

darioush avatar Feb 15 '24 19:02 darioush
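The steps above can be sketched as a short shell snippet. This is a sketch, not from the original thread: the chain ID is the one from the error logs earlier, so adjust CHAIN_ID for your own chain.

```shell
# Create the chain config directory for the subnet chain and enable
# skip-upgrade-check. CHAIN_ID is taken from the logs above; replace it
# with your own chain's ID if it differs.
CHAIN_ID="2BWGibjUxAkYcnWAJo432HBp53FE4j5CRe5bg2hV3j5mfxohnS"
CONFIG_DIR="$HOME/.avalanchego/configs/chains/$CHAIN_ID"
mkdir -p "$CONFIG_DIR"
printf '{"skip-upgrade-check": true}\n' > "$CONFIG_DIR/config.json"
cat "$CONFIG_DIR/config.json"
```

After writing the file, restart the node (sudo systemctl restart avalanchego) so the chain picks up the new configuration.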

@darioush I created the directory and added {"skip-upgrade-check": true} to config.json. Now I'm getting the error below:

Feb 15 19:29:30 ip-172-31-21-192 avalanchego[16644]: [02-15|19:29:30.497] ERROR chains/manager.go:361 error creating chain {"subnetID": "SEerb4ZQYMPFiSaekS1LqnJqPgYR5uPYBR9ggPjfxc1SaeBSj", "chainID": "2BWGibjUxAkYcnWAJo432HBp53FE4j5CRe5bg2hV3j5mfxohnS", "chainAlias": "2BWGibjUxAkYcnWAJo432HBp53FE4j5CRe5bg2hV3j5mfxohnS", "vmID":"ftkYV1ngCuXQC6rm8GrTkKUD6d2Qsc5kzjWKkAuysyZkiVBsS", "error": "error while creating new snowman vm rpc error: code = Unknown desc = failed to validate state while re-processing block (0xda6e59f42f0033738e7a427abed6f7d719f66dddef465a0e7a810a76ac005028: 2111): invalid gas used (remote: 1653562 local: 1654058)"} Feb 15 19:29:30 ip-172-31-21-192 avalanchego[16644]: [02-15|19:29:30.497] INFO health/worker.go:111 registered new check and initialized its state to failing {"namespace": "health", "name": "2BWGibjUxAkYcnWAJo432HBp53FE4j5CRe5bg2hV3j5mfxohnS", "tags": ["SEerb4ZQYMPFiSaekS1LqnJqPgYR5uPYBR9ggPjfxc1SaeBSj"]}

bhargavkacharla avatar Feb 15 '24 19:02 bhargavkacharla

Unfortunately this means you will need to delete the node's database and restart syncing from scratch.

darioush avatar Feb 15 '24 19:02 darioush

  1. Where can I do that?
  2. Since it syncs from scratch again, I can still get the older data after the sync, right?
  3. How long will it take to sync? The block height is 2200.

bhargavkacharla avatar Feb 15 '24 19:02 bhargavkacharla

@darioush

bhargavkacharla avatar Feb 15 '24 20:02 bhargavkacharla

  • The default location of the avalanchego database is ~/.avalanchego/db/<network name>. In your case I would recommend moving the database directory so you have a backup, e.g. mv ~/.avalanchego/db/fuji ~/.avalanchego/db/fuji.bak
  • You would be able to get the same data as before.
  • It could take a while (several hours) to sync the main network; the 2200 blocks on your chain would likely sync relatively quickly.

darioush avatar Feb 15 '24 20:02 darioush
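A minimal sketch of the backup step, assuming the default Fuji database location mentioned above (stop the node first with sudo systemctl stop avalanchego):

```shell
# Move the Fuji database aside so the node re-syncs from scratch while a
# backup is kept. Adjust the network name if your node is not on Fuji.
DB_DIR="$HOME/.avalanchego/db"
mkdir -p "$DB_DIR"        # no-op on a real node; keeps this sketch runnable anywhere
if [ -d "$DB_DIR/fuji" ]; then
    mv "$DB_DIR/fuji" "$DB_DIR/fuji.bak"   # keep a backup rather than deleting
fi
ls "$DB_DIR"
```

Then start the node again with sudo systemctl start avalanchego and it will bootstrap into a fresh database.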

@darioush,

The P-Chain is syncing, but I can't see any syncing for the C-Chain. Also, the block height was 2200, but it is syncing to a much higher block number.

Feb 15 20:23:08 ip-172-31-21-192 avalanchego[17100]: [02-15|20:23:08.202] INFO <P Chain> bootstrap/bootstrapper.go:651 fetching blocks {"numFetchedBlocks": 95000, "numTotalBlocks": 138378, "eta": "36s"} Feb 15 20:23:11 ip-172-31-21-192 avalanchego[17100]: [02-15|20:23:11.350] INFO <P Chain> bootstrap/bootstrapper.go:651 fetching blocks {"numFetchedBlocks": 100000, "numTotalBlocks": 138378, "eta": "32s"} Feb 15 20:23:14 ip-172-31-21-192 avalanchego[17100]: [02-15|20:23:14.474] INFO <P Chain> bootstrap/bootstrapper.go:651 fetching blocks {"numFetchedBlocks": 105000, "numTotalBlocks": 138378, "eta": "27s"} Feb 15 20:23:19 ip-172-31-21-192 avalanchego[17100]: [02-15|20:23:19.171] INFO <P Chain> bootstrap/bootstrapper.go:651 fetching blocks {"numFetchedBlocks": 110000, "numTotalBlocks": 138378, "eta": "23s"} Feb 15 20:23:24 ip-172-31-21-192 avalanchego[17100]: [02-15|20:23:24.290] INFO <P Chain> bootstrap/bootstrapper.go:651 fetching blocks {"numFetchedBlocks": 115000, "numTotalBlocks": 138378, "eta": "20s"} Feb 15 20:23:27 ip-172-31-21-192 avalanchego[17100]: [02-15|20:23:27.455] INFO <P Chain> bootstrap/bootstrapper.go:651 fetching blocks {"numFetchedBlocks": 120000, "numTotalBlocks": 138378, "eta": "15s"} Feb 15 20:23:30 ip-172-31-21-192 avalanchego[17100]: [02-15|20:23:30.514] INFO <P Chain> bootstrap/bootstrapper.go:651 fetching blocks {"numFetchedBlocks": 125000, "numTotalBlocks": 138378, "eta": "11s"} Feb 15 20:23:34 ip-172-31-21-192 avalanchego[17100]: [02-15|20:23:34.467] INFO <P Chain> bootstrap/bootstrapper.go:651 fetching blocks {"numFetchedBlocks": 130000, "numTotalBlocks": 138378, "eta": "7s"} Feb 15 20:23:38 ip-172-31-21-192 avalanchego[17100]: [02-15|20:23:38.560] INFO <P Chain> bootstrap/bootstrapper.go:651 fetching blocks {"numFetchedBlocks": 135000, "numTotalBlocks": 138378, "eta": "3s"} Feb 15 20:23:39 ip-172-31-21-192 avalanchego[17100]: [02-15|20:23:39.868] INFO <P Chain> bootstrap/bootstrapper.go:711 executing blocks {"numPendingJobs": 138378} Feb 
15 20:24:09 ip-172-31-21-192 avalanchego[17100]: [02-15|20:24:09.868] INFO <P Chain> queue/jobs.go:203 executing operations {"numExecuted": 22881, "numToExecute": 138378, "eta": "2m31s"}

bhargavkacharla avatar Feb 15 '24 20:02 bhargavkacharla

This is normal: first the P-Chain syncs, then the C and X chains, and then your Subnet's chain.

Each chain is independent, so the higher block numbers are from the P-Chain.

I recommend waiting for the process to complete.

darioush avatar Feb 15 '24 20:02 darioush
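Bootstrap progress per chain can also be checked with AvalancheGo's info.isBootstrapped API. A sketch, assuming the node's default API port 9650 (the subnet chain ID is the one from the logs above; the curl calls will simply fail quietly if no node is running locally):

```shell
# Query whether a given chain has finished bootstrapping on the local node.
check_bootstrapped() {
    PAYLOAD="{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"info.isBootstrapped\",\"params\":{\"chain\":\"$1\"}}"
    curl -s -X POST -H 'content-type:application/json' \
        --data "$PAYLOAD" http://127.0.0.1:9650/ext/info || true
    echo
}
check_bootstrapped P
check_bootstrapped C
check_bootstrapped 2BWGibjUxAkYcnWAJo432HBp53FE4j5CRe5bg2hV3j5mfxohnS
```

Each call returns {"isBootstrapped": true} inside a JSON-RPC envelope once that chain has finished bootstrapping.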

@darioush

  1. The logs stopped at this point; does that mean the sync is done? It didn't sync the 2200 blocks of the subnet yet:
Feb 16 19:11:49 ip-172-31-21-192 avalanchego[382]: [02-16|19:11:49.330] INFO <P Chain> validators/logger.go:50 node added to validator set {"subnetID": "11111111111111111111111111111111LpoYY", "nodeID": "NodeID-GPc1Si7SQ6oAEfP8MzQ3cFN3fsAtkwC6i", "publicKey": "0x", "txID": "ANMB6Nhkapf38GLSM3hUe8rAg4naKMPpujGbdaTvKaoNiFrs8", "weight": 1000000000}

Feb 16 19:11:49 ip-172-31-21-192 avalanchego[382]: [02-16|19:11:49.340] INFO <P Chain> validators/logger.go:50 node added to validator set {"subnetID": "2J8TLHUDfWuGnEuKKG6n47rkU9YcocEtpxCAaFg447doVBYLEH", "nodeID": "NodeID-GPc1Si7SQ6oAEfP8MzQ3cFN3fsAtkwC6i", "publicKey": "0x", "txID": "xhMY41qkHYUexPpcXgNeuKF4rxrGTb9MF2ksqCGV1zeDE4C88", "weight": 20} Feb 16 19:11:49 ip-172-31-21-192 avalanchego[382]: [02-16|19:11:49.340] INFO <P Chain> validators/logger.go:50 node added to validator set {"subnetID": "SEerb4ZQYMPFiSaekS1LqnJqPgYR5uPYBR9ggPjfxc1SaeBSj", "nodeID": "NodeID-GPc1Si7SQ6oAEfP8MzQ3cFN3fsAtkwC6i", "publicKey": "0x", "txID": "q4zcy6vsHWqfKn4ycEVGrn9XVR1itpDYrHvVd6M99uw6bDTwL", "weight": 20} Feb 16 19:11:49 ip-172-31-21-192 avalanchego[382]: [02-16|19:11:49.340] INFO <P Chain> validators/logger.go:50 node added to validator set {"subnetID": "25BsTBm8GQZirHUUKoRSjed8NxjTxtnXP1FvGNanV3QRuizkdt", "nodeID": "NodeID-GPc1Si7SQ6oAEfP8MzQ3cFN3fsAtkwC6i", "publicKey": "0x", "txID": "onnhtGXvE2H6mE7EVRbTcLZTnaT84pkSPNiFFcq4trR2rz1VF", "weight": 20} Feb 16 19:11:49 ip-172-31-21-192 avalanchego[382]: [02-16|19:11:49.340] INFO <P Chain> validators/logger.go:50 node added to validator set {"subnetID": "dLyxcugrVZarrCSREN4B7idVwoRzEbP3kzyJN2BTsoRqYggN5", "nodeID": "NodeID-GPc1Si7SQ6oAEfP8MzQ3cFN3fsAtkwC6i", "publicKey": "0x", "txID": "2s6Z3cWr4jLW4cZnwGfM7t1DErfamr6DbpgPY81i5Uup4R6jXa", "weight": 20} Feb 16 19:11:49 ip-172-31-21-192 avalanchego[382]: [02-16|19:11:49.358] INFO <C Chain> snowman/transitive.go:544 consensus starting {"lastAcceptedBlock": "2wanvW19NMoDDT2jmU7EPkS4UKZ2a9p26e42h1YHpi617nnAmv"} Feb 16 19:12:02 ip-172-31-21-192 avalanchego[382]: [02-16|19:12:02.808] INFO <X Chain> snowman/transitive.go:544 consensus starting {"lastAcceptedBlock": "2HfyDoTv6uTTndAE9dvkzRQTjYRV4FuTKwCSsWy7Lf8LYtK3aS"} Feb 16 19:12:20 ip-172-31-21-192 avalanchego[382]: INFO [02-16|19:12:20.793] <C Chain> github.com/ava-labs/coreth/core/blockchain.go:1497: Resetting chain preference 
number=30,036,933 hash=b921f4..8c9d56 drop=1 dropfrom=fd7306..7862ae add=1 addfrom=7aa584..c562ed

Also, the status of the curl commands is attached (Screenshot: 2024-02-17 12:56 AM).

  2. This is for the validator node; I'm running an RPC node as well. Does that follow the same steps, backing up the db and syncing the node from scratch?

bhargavkacharla avatar Feb 16 '24 19:02 bhargavkacharla

@darioush

bhargavkacharla avatar Feb 16 '24 19:02 bhargavkacharla

  • Based on the API output, it seems your node is bootstrapped.
  • You would need to follow the same procedure on the RPC node as well.

darioush avatar Feb 16 '24 21:02 darioush

@darioush The validator node and RPC node are synced, but I still can't do any transactions. Also, the account balances are 0, although earlier there was some balance in the accounts.

bhargavkacharla avatar Feb 19 '24 14:02 bhargavkacharla

@darioush I guess the nodes didn't sync the Subnets; the sync was done only for the P, X, and C chains.

bhargavkacharla avatar Feb 19 '24 15:02 bhargavkacharla

(Screenshot: 2024-02-19 8:49 PM) Errors from chainId.log; this is from the RPC node.

bhargavkacharla avatar Feb 19 '24 15:02 bhargavkacharla

As you can see, I'm getting a nonce too low error and execution reverted while syncing blocks of the subnet.

bhargavkacharla avatar Feb 19 '24 15:02 bhargavkacharla

I'm also getting the older config-mismatch log:

INFO [02-19|06:58:27.675] <2BWGibjUxAkYcnWAJo432HBp53FE4j5CRe5bg2hV3j5mfxohnS Chain> github.com/ava-labs/subnet-evm/trie/triedb/hashdb/database.go:521: Persisted trie from memory database nodes=20 size=2.24KiB time="68.345µs" gcnodes=0 gcsize=0.00B gctime=0s livenodes=0 livesize=0.00B ERROR[02-19|06:58:27.680] <2BWGibjUxAkYcnWAJo432HBp53FE4j5CRe5bg2hV3j5mfxohnS Chain> github.com/ava-labs/subnet-evm/core/genesis.go:246: found mismatch between config on database vs. new config storedConfig="{"byzantiumBlock":0,"chainId":2023032,"constantinopleBlock":0,"contractDeployerAllowListConfig":{"adminAddresses":["0x2505a33a144d9e63c3c53ee65c3108239e04a0af","0x88fa61d2faa13aad8fbd5b030372b4a159bbbdfb"],"blockTimestamp":0},"contractNativeMinterConfig":{"adminAddresses":["0x2505a33a144d9e63c3c53ee65c3108239e04a0af"],"blockTimestamp":0},"eip150Block":0,"eip155Block":0,"eip158Block":0,"feeConfig":{"gasLimit":8000000,"targetBlockRate":2,"minBaseFee":25000000000,"targetGas":15000000,"baseFeeChangeDenominator":36,"minBlockGasCost":0,"maxBlockGasCost":1000000,"blockGasCostStep":200000},"feeManagerConfig":{"adminAddresses":["0x2505a33a144d9e63c3c53ee65c3108239e04a0af"],"blockTimestamp":0},"homesteadBlock":0,"istanbulBlock":0,"muirGlacierBlock":0,"petersburgBlock":0,"subnetEVMTimestamp":0,"txAllowListConfig":{"adminAddresses":["0x88fa61d2faa13aad8fbd5b030372b4a159bbbdfb","0x2505a33a144d9e63c3c53ee65c3108239e04a0af"],"blockTimestamp":0},"upgrades":{}}" 
newConfig="{"byzantiumBlock":0,"chainId":2023032,"constantinopleBlock":0,"contractDeployerAllowListConfig":{"adminAddresses":["0x2505a33a144d9e63c3c53ee65c3108239e04a0af","0x88fa61d2faa13aad8fbd5b030372b4a159bbbdfb"],"blockTimestamp":0},"contractNativeMinterConfig":{"adminAddresses":["0x2505a33a144d9e63c3c53ee65c3108239e04a0af"],"blockTimestamp":0},"durangoTimestamp":1707840000,"eip150Block":0,"eip155Block":0,"eip158Block":0,"feeConfig":{"gasLimit":8000000,"targetBlockRate":2,"minBaseFee":25000000000,"targetGas":15000000,"baseFeeChangeDenominator":36,"minBlockGasCost":0,"maxBlockGasCost":1000000,"blockGasCostStep":200000},"feeManagerConfig":{"adminAddresses":["0x2505a33a144d9e63c3c53ee65c3108239e04a0af"],"blockTimestamp":0},"homesteadBlock":0,"istanbulBlock":0,"muirGlacierBlock":0,"petersburgBlock":0,"subnetEVMTimestamp":0,"txAllowListConfig":{"adminAddresses":["0x88fa61d2faa13aad8fbd5b030372b4a159bbbdfb","0x2505a33a144d9e63c3c53ee65c3108239e04a0af"],"blockTimestamp":0},"upgrades":{}}"

bhargavkacharla avatar Feb 19 '24 15:02 bhargavkacharla

(Screenshot: 2024-02-20 12:15 AM) Logs from the validator node. @darioush

bhargavkacharla avatar Feb 19 '24 18:02 bhargavkacharla

@darioush Can you please look into this? I am stuck, as the subnet is down.

bhargavkacharla avatar Feb 20 '24 05:02 bhargavkacharla

It's a bit hard to follow what state and configuration your nodes are in.

  • Could you provide more details about how you got the node into the state where it is producing the found mismatch between config on database vs. new config message? In any case, a workaround for this error is to enable the skip-upgrade-check flag for that chain, as before.
  • Nonce too low comes from transactions that need to be re-issued with the valid nonce corresponding to the node's state. Since the nonce is lower than the node's, these transactions were likely issued before a transaction with the same nonce was accepted by the network. The logs come from eth_sendRawTransaction and are not related to syncing the subnet's blocks.
  • I'm not sure what the last log snippet is showing; is the last log from the validator from 02-16?

darioush avatar Feb 20 '24 06:02 darioush
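For the nonce too low errors, the account's current nonce can be read back from the node before re-issuing a stuck transaction, using the standard eth_getTransactionCount JSON-RPC method. A sketch, not from the original thread: RPC_URL assumes a local node serving the subnet chain from the logs, and ADDRESS is one of the admin addresses from the config above, used purely as an example.

```shell
# Ask the node for the account's next valid nonce; pending transactions
# should then be re-issued starting from this value.
RPC_URL="http://127.0.0.1:9650/ext/bc/2BWGibjUxAkYcnWAJo432HBp53FE4j5CRe5bg2hV3j5mfxohnS/rpc"
ADDRESS="0x2505a33a144d9e63c3c53ee65c3108239e04a0af"
PAYLOAD="{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"eth_getTransactionCount\",\"params\":[\"$ADDRESS\",\"latest\"]}"
curl -s -X POST -H 'content-type:application/json' --data "$PAYLOAD" "$RPC_URL" || true
echo
```

The result field of the response is the nonce in hex; any transaction re-issued with a lower nonce will be rejected with nonce too low again.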

@darioush

  1. I removed the old db and restarted the validator and RPC nodes; they synced from scratch. While syncing the subnet, I got this error.
  2. Yes, that is from the validator node.

bhargavkacharla avatar Feb 20 '24 07:02 bhargavkacharla

Can you please provide full logs as txt? They will be more helpful than partial screenshots.

ceyonur avatar Feb 20 '24 07:02 ceyonur

This is running in an EC2 instance; I'm not sure how to send the entire log file from the VM.

bhargavkacharla avatar Feb 20 '24 07:02 bhargavkacharla

You can find the logs under the ~/.avalanchego/logs folder.

ceyonur avatar Feb 20 '24 08:02 ceyonur
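One way to get the full logs off the instance is to bundle that folder into a single archive and copy it down. A sketch, assuming the default log location:

```shell
# Bundle the node's logs into one archive so it can be copied off the VM.
LOG_DIR="$HOME/.avalanchego/logs"
mkdir -p "$LOG_DIR"        # no-op if the node already created it
tar -czf /tmp/avalanchego-logs.tar.gz -C "$LOG_DIR" .
ls -lh /tmp/avalanchego-logs.tar.gz
```

Then, from your local machine, something like scp ubuntu@<instance-ip>:/tmp/avalanchego-logs.tar.gz . (using the same SSH key you use to reach the EC2 instance) will download the archive.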