Neither config.mocha.spec nor .file has any effect
This issue is similar to, but distinct from, #1544. The latter requests the ability to specify which tests mocha should run via CLI arguments to `hardhat test`, which are glob patterns. In contrast, this issue is concerned with specifying which tests to run via the hardhat config.
I expected that this would be possible, since it is strongly implied by https://hardhat.org/config/#mocha-configuration which says:
> You can configure how your tests are run using the `mocha` entry, which accepts the same options as Mocha.
and links to https://mochajs.org/, whose config section says:
> Test files can be specified using `spec`, e.g., `"spec": "test/**/*.spec.js"`.
Putting these together, I expected that setting my `hardhat.config.js` to something like:
```js
module.exports = {
  paths: {
    artifacts: './build/contracts',
    deployments: 'deployments',
  },
  mocha: {
    spec: [
      'test/**/*.test.js',
    ],
  },
  defaultNetwork: 'hardhat',
  // ...
};
```
would do the trick. But it turns out that this `spec` config parameter is completely ignored!
AFAICS, the guilty code is:
https://github.com/nomiclabs/hardhat/blob/854a149835ed0763bb7396189f299fd094409f61/packages/hardhat-core/src/builtin-tasks/test.ts#L21-L41
which clearly hardcodes `**/*.js`, or `**/*.ts` if running in TypeScript mode.
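For readers who don't want to follow the link, the gist of that task is roughly the following (a loose JavaScript paraphrase of the linked TypeScript, with names simplified; not the exact source):

```js
// Loose paraphrase of the linked discovery logic, not the exact source.
const glob = require("glob");
const path = require("path");

function getTestFiles(config) {
  // Discovery is a hardcoded glob over config.paths.tests;
  // config.mocha.spec is never consulted.
  const jsFiles = glob.sync(path.join(config.paths.tests, "**/*.js"));
  // The .ts glob is only added when the project runs in TypeScript mode.
  const tsFiles = glob.sync(path.join(config.paths.tests, "**/*.ts"));
  return [...jsFiles, ...tsFiles];
}
```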
This seems like a missed opportunity, because hardhat does indeed pass the `config.mocha` config object through to mocha:
https://github.com/nomiclabs/hardhat/blob/854a149835ed0763bb7396189f299fd094409f61/packages/hardhat-core/src/builtin-tasks/test.ts#L45-L54
but then any `spec` provided in the config is overridden, since hardhat passes an explicit list of files to test through to mocha:
https://github.com/nomiclabs/hardhat/blob/854a149835ed0763bb7396189f299fd094409f61/packages/hardhat-core/src/builtin-tasks/test.ts#L96-L104
If the user doesn't pass a list of test files via the CLI, why not let mocha perform the discovery itself rather than having hardhat do it? That way, mocha's existing test discovery configuration mechanism can be utilised.
This seems like a sensible thing to do, and something that shouldn't break anything (you could argue that it would be a bug fix). So, to clarify, there are four combinations here, depending on whether the user set the `spec` config and whether the user passed files via the CLI:
- The user didn't set a spec and didn't pass files as arguments: all tests are run
- The user didn't set a spec and passed some files: the files are run
- The user set a spec but didn't pass any files: the spec is used
- The user set a spec and passed some files: ???
I'm not sure about the fourth one. Should it ignore the spec and run those files? Should it only run the files that were passed and that match the spec? (IMO the first option is the correct one; we could also maybe throw in this case.)
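In code, the precedence being discussed might look something like this (a minimal sketch; the helper name and signature are hypothetical, not Hardhat API):

```js
const glob = require("glob");

// Hypothetical helper illustrating the four combinations above.
function selectTestFiles(cliFiles, mochaSpec, defaultPattern) {
  if (cliFiles.length > 0) {
    // Cases 2 and 4: explicit CLI files win
    // (or, alternatively, throw in case 4 when a spec is also set).
    return cliFiles;
  }
  if (mochaSpec !== undefined) {
    // Case 3: discover files via the user's spec glob(s).
    const patterns = Array.isArray(mochaSpec) ? mochaSpec : [mochaSpec];
    return patterns.flatMap((pattern) => glob.sync(pattern));
  }
  // Case 1: no spec and no CLI files, so fall back to the current default.
  return glob.sync(defaultPattern);
}
```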
@alcuadrado what do you think?
@fvictorio commented on August 24, 2021 6:22 PM:
> This seems like a sensible thing to do, and something that shouldn't break anything (you could argue that it would be a bug fix). So, to clarify, there are four combinations here, depending on whether the user set the `spec` config and whether the user passed files via the CLI:
>
> - The user didn't set a spec and didn't pass files as arguments: all tests are run
> - The user didn't set a spec and passed some files: the files are run
> - The user set a spec but didn't pass any files: the spec is used
Yes, these all make sense.

> - The user set a spec and passed some files: ???
>
> I'm not sure about the fourth one. Should it ignore the spec and run those files? Should it only run the files that were passed and that match the spec? (IMO the first option is the correct one; we could also maybe throw in this case.)
I think both are arguably correct, but I tend to agree with you that the first is maybe more intuitive. Or maybe allow the spec to filter out files which were passed, but also issue warnings that they were filtered out. It reminds me a bit of the difference between `git add` and `git add --force`.
> It reminds me a bit of the difference between `git add` and `git add --force`.
That's interesting, can you expand on that?
The `.gitignore` system allows filtering of which files get picked up by `git add`, but not of which files get picked up by `git add --force`.
The `spec` config is a similar file filter, and you are debating whether the files passed as CLI arguments should have this filter applied or not. So you could make it so that the filter is applied by default, but not if an additional `--force` option is specified.
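Concretely, and purely hypothetically since no such flag exists today, the behaviour could look like:

```
npx hardhat test test/foo.test.js           # file is filtered by config.mocha.spec
npx hardhat test --force test/foo.test.js   # hypothetical flag: bypasses the spec filter
```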
Hey @aspiers!
Thanks for creating this issue and digging into it so deeply.
> - The user set a spec and passed some files: ???
I mostly think that we should do what mocha does. It runs everything that's covered by the `spec` and the file. I just tried it in a new project.
I have a few questions though:

- Does it work out of the box if we don't call any `addFile`? I believe that's part of the CLI "part" of mocha, and not really exported by the library interface. Take a look at this file.
- If we want to behave like mocha, or if it doesn't work out of the box, would we need to implement this "collect-files" functionality ourselves? Or can we use that file?
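For context, mocha's documented programmatic interface registers files explicitly via `addFile`; there is no `spec`-based discovery at this layer, which is why the question matters. A minimal example:

```js
const Mocha = require("mocha");

// Mocha's programmatic API: the caller supplies files one by one;
// spec handling lives in mocha's CLI layer, not here.
const mocha = new Mocha({ timeout: 20000 });
mocha.addFile("test/example.test.js");
mocha.run((failures) => {
  process.exitCode = failures ? 1 : 0;
});
```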
Those all sound like good questions to me; sorry, I don't have answers immediately to hand!
While waiting for official support, you can set different test paths using Hardhat tasks:

```js
const path = require("path");

// Override the built-in test task to log which directory is being used.
task("test")
  .setAction(async (taskArgs, hre, runSuper) => {
    console.log(`Running tests within ${hre.config.paths.tests}`);
    taskArgs["deployFixture"] = true; // extra flag set before delegating (specific to this setup)
    return await runSuper(taskArgs);
  });

// A companion task that swaps the tests path before delegating to `test`.
task("test:backtest")
  .setAction(async (taskArgs, hre) => {
    hre.config.paths.tests = path.join(path.parse(hre.config.paths.tests).dir, "back_tests");
    return await hre.run("test");
  });
```
This works pretty well for me; hopefully it helps you guys out as well.
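Usage then looks something like this (assuming your regular tests live in the configured tests directory and the backtests in a sibling `back_tests` directory):

```
npx hardhat test            # runs the regular suite from config.paths.tests
npx hardhat test:backtest   # swaps the tests path to back_tests, then delegates to test
```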
This issue was marked as stale because it didn't have any activity in the last 30 days. If you think it's still relevant, please leave a comment indicating so. Otherwise, it will be closed in 7 days.
I believe this issue is still relevant.
On a more general note, my personal recommendation (which of course you are free to take or leave) is that it is often more productive to leave issues open until there is a clear resolution (e.g. a fix, or a decision such as "we're not going to implement this").
When issues or feature requests are marked as closed without resolution, then people are more likely to miss them, resulting in duplicate reports. And people searching for ways to contribute are more likely to miss them too. Here's someone else's blog post on the same topic.
However, I understand it is annoying to have open issues going nowhere. IMHO one solution for this is to label short-lived and long-lived issues differently to distinguish them. Another is assigning issues to different milestones. This makes it clear to users which issues are likely to be fixed sooner rather than later. This way you could disable the auto-closing of stale issues which the bot is currently configured to perform, whilst keeping the automatic addition of the `Stale` label.
Hope this is useful food for thought. Thanks again!
I agree with you @aspiers. I'm not super convinced that the value that the auto-close bot adds (which is non-zero, tbh) is worth the downsides. This is something we are experimenting with, so we might just as well stop doing it.