Migrate test framework to Hardhat
Initial drop-in migration using an external RPC node for testing
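For context, running the suites against an external node amounts to a Hardhat network entry along these lines (a minimal sketch; the network name and URL are placeholders, not the exact values in this PR):

```js
// hardhat.config.js -- minimal sketch of testing against an external RPC node
module.exports = {
  defaultNetwork: "development",
  networks: {
    development: {
      // An externally-run node, rather than Hardhat's in-process network
      url: "http://127.0.0.1:8545",
    },
  },
};
```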
- `contracts-network` -- PASSING ✅
- `cross-chain` -- PASSING ✅
- `extensions` -- PASSING ✅
- `packages` -- PASSING ✅
- `reputation-system` -- PASSING ✅
- `npx hardhat coverage` -- PASSING ✅
- `REPORT_GAS=true npx hardhat test` -- PASSING ✅
TO DO:
- Migrate smoke tests (see https://github.com/NomicFoundation/hardhat/issues/3345)
NOTES:
- Both `provision:token:contracts` and `provision:safe:contracts` have been incorporated into the compile task (roughly as in the sketch below).
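Roughly speaking, that means hooking the provisioning into Hardhat's built-in `compile` task. A sketch under assumptions (the helper module and function names are hypothetical stand-ins for the real provisioning scripts):

```js
// hardhat.config.js -- sketch of folding provisioning into `compile`
const { task } = require("hardhat/config");
const { TASK_COMPILE } = require("hardhat/builtin-tasks/task-names");

// Hypothetical module wrapping what the `provision:token:contracts` and
// `provision:safe:contracts` npm scripts used to do.
const { provisionTokenContracts, provisionSafeContracts } = require("./scripts/provision");

// Overriding a built-in task: `runSuper` invokes the original compile action.
task(TASK_COMPILE, async (taskArgs, hre, runSuper) => {
  await runSuper(taskArgs);
  await provisionTokenContracts();
  await provisionSafeContracts();
});
```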
Noting that the latest version of Hardhat (2.21) introduced a new dependency, `@nomicfoundation/edr`, which seems to break our build. So pinning to version 2.19 with a tilde for now (see the snippet below).
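That is, something like this in `package.json` (the patch version here is illustrative; a `~` range accepts 2.19.x patch releases but never 2.20+):

```json
{
  "devDependencies": {
    "hardhat": "~2.19.0"
  }
}
```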
~~Regarding the version check script, I'm not 100% sure. Given that the "last release" will run on Truffle and the "current" version will run on Hardhat, we may need to update that file a few times over the next release cycle to keep the commands/paths accurate. I can keep working on it, but I'm also happy just saying that it won't work for releases which span Truffle/Hardhat.~~

I was able to sort it out; it seems to work fine now.
Consider this approval, but with provisos:
- I'd like #1227 to be reviewed and merged first, as it's the sort of PR that the storage tests (disabled here) are intended to give confidence in.
- Not a fan of the renamed `TRUFFLE_FOREIGN`. The name was intended to capture whether or not the chain being denominated the foreign chain was the chain Truffle had deployed to. `FOREIGN_CHAIN`, to my ear, is just whether there is a foreign chain or not, which is always the case when the relevant tests are run. Was there a reason you didn't just end up with `HARDHAT_FOREIGN`?
- I'm not super keen on being pinned to an old version of Hardhat. I tried updating (which I agree didn't work out of the box) and explicitly installed the package that was missing, and it worked; but the missing package was Linux-specific, and we need this to work on OSX as well. Maybe a set of optional dependencies in `package.json` would make that work (a sketch follows this list)? It's tough to tell. But we do need to figure it out, as otherwise we'll be back where we are currently with Truffle, unable to test against newer forks locally. Let's not hold this up because of it, but do it in another PR imminently.
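For what it's worth, a sketch of the optional-dependencies idea; the platform-specific EDR package names and the version below are assumptions about how the native builds are published, not checked against the registry:

```json
{
  "optionalDependencies": {
    "@nomicfoundation/edr-linux-x64-gnu": "^0.3.0",
    "@nomicfoundation/edr-darwin-x64": "^0.3.0",
    "@nomicfoundation/edr-darwin-arm64": "^0.3.0"
  }
}
```

npm skips an optional dependency whose `os`/`cpu` fields don't match the host, and a failed optional install doesn't fail the install as a whole, which is what would let one `package.json` cover both Linux CI and OSX.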
The tests `should correctly respond to a request for a reputation state in previous states` and `should correctly respond to a request for a reputation state in previous states with no proof` seem to have become spotty in this PR. It doesn't seem to be the case in CI, but is definitely the case locally. Is this only for me, or do you see it as well?

EDIT: Maybe this is something I somehow back-propagated from my rebasing attempts; checking out in a new repo doesn't show this behaviour, as far as I can see... :thinking:
@area it's pretty spotty in CI for me; re-running the build usually sorts it. I've been ignoring it for the time being.
I've fixed them (I think), though I'm not totally sold on my fix. What was happening was that when we explicitly called `client._miner.sync()` in those tests, the client was (sometimes) still in the process of handling blocks that it had seen, and the two sync processes would clobber each other. I'm pretty sure this is behaviour that only manifests in the tests: the miner is very gun-shy about calling `.sync()` itself, and we've written it in a way that shouldn't cause this.
To resolve this, I've moved the initialisation of the client out of the `beforeEach` and into the tests themselves. That means the block handler isn't active while we're setting up the state we want. We then call `initialise`, which includes a `sync` (a sketch of the shape of the change follows).
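A hedged sketch of that restructuring; the client constructor, its arguments, and the assertions are stand-ins for the real reputation-miner test code:

```js
// Before: the client (and therefore its block handler) came up in beforeEach,
// so an explicit client._miner.sync() in a test could race against a sync the
// block handler had already started, and the two would clobber each other.

// After: build the chain state first, then bring the client up in the test.
it("should correctly respond to a request for a reputation state in previous states", async () => {
  // ...set up the reputation states we want; no block handler is live yet...

  const client = new ReputationMinerClient(clientArgs); // hypothetical args
  await client.initialise(colonyNetwork.address);       // includes the sync

  // ...query the client for the previous states and assert on the proofs...
});
```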
However, this is only treating the symptoms; I don't know why this PR causes this behaviour to manifest. My best guess is that the specific queries the client does to sync take longer than they did against the Truffle RPC endpoint, and so by the time we hit the `sync` call in the original tests it sometimes hasn't finished. EDIT: Alternatively, everything else the tests are doing could be quicker, which would have the same result.
Are there any other newly-spotty tests you've noticed?
@area that was the only one that was consistently failing