Hardhat Network stops working after resetting it with an invalid URL
Reproducing this is pretty easy:

- Create a hardhat project
- Run `hh console`
- Reset the network with an invalid URL, e.g.:

  ```
  await network.provider.send("hardhat_reset", [{ forking: { jsonRpcUrl: "https://www.example.com" } }])
  ```

- Try to reset it to a non-forked state or to a valid URL:

  ```
  await network.provider.send("hardhat_reset", [])
  ```
The last step won't work. The reason is a bit subtle. When you reset the network, the node is set to undefined:
https://github.com/nomiclabs/hardhat/blob/854a149835ed0763bb7396189f299fd094409f61/packages/hardhat-core/src/internal/hardhat-network/provider/provider.ts#L321
Then when _init is called, the provider will try to create a node:
https://github.com/nomiclabs/hardhat/blob/854a149835ed0763bb7396189f299fd094409f61/packages/hardhat-core/src/internal/hardhat-network/provider/provider.ts#L232
But this will fail, because creating a forked node implies sending some JSON-RPC requests to the fork URL (for example, to get the nonces of the initial accounts). So `_init` will fail.
The problem is that, if you then send a `hardhat_reset` call to fix this, the `_init` method will be called first because the node is undefined. And since `this._forkConfig` is still invalid, the node creation will fail again.
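To make the failure mode concrete, here is a minimal, self-contained simulation of how the provider gets stuck. This is not Hardhat's actual code; the class and its behavior are made up for illustration (the toy `_init` treats every fork config as unreachable). The key point it mirrors is that every request re-runs `_init` while the node is undefined, so a second `hardhat_reset` never reaches the reset handler:

```javascript
// Toy model of the provider (NOT Hardhat's real implementation).
// _init() recreates the node from the stored fork config and throws when that
// config points at an unreachable URL; request() calls _init() before
// dispatching, mirroring the behavior described above.
class ToyProvider {
  constructor() {
    this._node = "node"; // stands in for the HardhatNode instance
    this._forkConfig = undefined;
  }

  _init() {
    if (this._node !== undefined) return;
    if (this._forkConfig !== undefined) {
      // Stands in for the network requests made during forked-node creation
      throw new Error(`cannot connect to ${this._forkConfig.jsonRpcUrl}`);
    }
    this._node = "node";
  }

  request(method, params) {
    this._init(); // runs first; throws while the node is undefined and the config is bad
    if (method === "hardhat_reset") {
      this._node = undefined;
      this._forkConfig = params[0] ? params[0].forking : undefined;
      this._init();
    }
    return true;
  }
}

const provider = new ToyProvider();

// Step 1: reset to an invalid fork URL -> _init throws, node stays undefined
let firstError;
try {
  provider.request("hardhat_reset", [
    { forking: { jsonRpcUrl: "https://www.example.com" } },
  ]);
} catch (e) {
  firstError = e;
}

// Step 2: resetting to a non-forked state fails too, because request() calls
// _init() before the reset handler can replace the invalid fork config
let secondError;
try {
  provider.request("hardhat_reset", []);
} catch (e) {
  secondError = e;
}
// Both firstError and secondError are set: the provider is stuck.
```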
I'm not really sure what the right solution is. The first thing that comes to mind is that `_reset` should catch any error thrown by `_init` and restore `this._forkConfig` to the value it had before. This isn't perfect though: if the URL in your config is wrong, then the same problem will happen. That's more of an edge case, but it suggests that there is a design issue here.
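A sketch of that rollback idea on a toy model (again, not Hardhat's real code; the class and names are illustrative, and the toy `_init` treats every fork config as unreachable): snapshot the previous fork config before resetting, and restore it if `_init` throws with the new one.

```javascript
// Toy model (NOT Hardhat's real code) of the proposed rollback in _reset:
// keep the previous fork config around and restore it when initializing with
// the new config fails, so the provider stays usable.
class ToyProvider {
  constructor() {
    this._node = "node";
    this._forkConfig = undefined; // undefined = non-forked, assumed valid here
  }

  _init() {
    if (this._node !== undefined) return;
    if (this._forkConfig !== undefined) {
      throw new Error(`cannot connect to ${this._forkConfig.jsonRpcUrl}`);
    }
    this._node = "node";
  }

  request(method, params) {
    this._init();
    if (method === "hardhat_reset") {
      const previousForkConfig = this._forkConfig;
      this._node = undefined;
      this._forkConfig = params[0] ? params[0].forking : undefined;
      try {
        this._init();
      } catch (e) {
        // Roll back to the last known-good config, recreate the node,
        // and still surface the failure to the caller.
        this._forkConfig = previousForkConfig;
        this._init();
        throw e;
      }
    }
    return true;
  }
}

const provider = new ToyProvider();

// Resetting to an invalid fork URL still fails...
let resetError;
try {
  provider.request("hardhat_reset", [
    { forking: { jsonRpcUrl: "https://www.example.com" } },
  ]);
} catch (e) {
  resetError = e;
}

// ...but the provider is usable again: a follow-up reset now succeeds.
const recovered = provider.request("hardhat_reset", []);
```

Note that this doesn't cover the edge case above: if the fork URL from the user's config is itself invalid, `previousForkConfig` is equally invalid and the rollback `_init` would throw too.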
For the record, I don't think any user has reported this, so this is a low-priority issue.
Would like to bump this issue and see it get fixed.
@lzalvin can you expand on how this is affecting you?
@fvictorio thanks for the response. I haven't fully isolated the cause, but I have observed that calling hardhat_reset with an RPC endpoint that is down (which happens quite often) can render the hardhat node unusable and require it to be restarted.
For my purposes, I would like to be able to catch any errors from hardhat_reset and try giving it a different provider, but am unable to for the reason above.
Oh I see, thanks, that's very helpful.
This issue was marked as stale because it didn't have any activity in the last 30 days. If you think it's still relevant, please leave a comment indicating so. Otherwise, it will be closed in 7 days.
Would still love to see this fixed : )