Refactor tests
The tests are a little hard to troubleshoot and extend because they are tightly coupled and rely on global setup/teardown.
I found myself spending a lot of time commenting and uncommenting big blocks of code to get to the case I was trying to troubleshoot (see below about `only` as a solution) and disabling all but one of the handler variations at a time. Plus, if I wanted to add tests or test suites, it would be hard to know where to put them or to mentally focus on them, because all of the tests are in one file.
Proposal:
- put the tests in a tests folder
- put each functional area into its own file or directory
- create utility functions to provide shared support to all of the tests
I tried to do this twice yesterday with slightly different approaches, but due to the tight coupling and my unfamiliarity with the testing flow (I discovered many "interesting" test cases and data sets!), I had to bail.
Maybe you guys can lay out the desired structure and a community member can refactor?
Do you know about mocha's `only` feature?
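For anyone following along, here is a minimal sketch of how `.only` narrows a run; the suite and spec names are made up for illustration:

```js
// With .only present, mocha runs just the marked spec (or suite)
// and skips everything else in the run.
describe('getWatched', function() {
  it.only('runs only this spec', function() {
    // troubleshoot a single case without commenting out other tests
  });

  it('is skipped while an .only exists anywhere in the run', function() {});
});
```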
Some of my thoughts on the tests are captured in https://github.com/paulmillr/chokidar/pull/396#issuecomment-155087328
I did end up spending a bunch of time recently refactoring the tests, and I accomplished the first bullet on that list (separate fixture dirs for each test), which allows the tests to run in half the time they used to. Unfortunately, the reliability (false-negative failures in CI environments due to file systems being wonky) stayed about the same.
Having gone through this exercise, I no longer think trying to get the tests running in parallel with AVA will be effective, so I'm not going to head down that road. I also have doubts that moving write operations to a child process would provide any value, so I don't plan on investing any more time in that idea.
The value of the current structure is getting all the tests run for each mode. I suppose it could still be done even if the suites are split into separate files, but it might be a pretty superficial change.
As @paulmillr mentioned, if I'm narrowing down on something in particular I use `.only`, and I was thinking of adding CLI flags like `--fsevents` to control the mode more easily than commenting code (just a few lines at the top).
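As a rough illustration of that idea, here is a sketch of selecting the mode without commenting code; the environment variable, mode names, and `runSharedSuite` helper are all assumptions for the example, not chokidar's actual test setup:

```js
// Hypothetical top-of-file mode selection. runSharedSuite is an assumed
// helper that registers the shared specs with the given chokidar options.
const modes = {
  fsevents: { useFsEvents: true },
  polling: { usePolling: true },
  'non-polling': { usePolling: false, useFsEvents: false }
};

// e.g. CHOKIDAR_TEST_MODE=fsevents npm test
const only = process.env.CHOKIDAR_TEST_MODE;

Object.keys(modes)
  .filter(function(name) { return !only || name === only; })
  .forEach(function(name) { runSharedSuite(name, modes[name]); });
```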
@paulmillr sorry, this might have been a little out of the blue... I've used tags / grep, skip, and filesystem glob patterns to filter tests, but not `only`. `only` looks very useful.
I see there is shared setup and teardown logic for the tests which makes this a bit challenging.
Personally, I try to keep collections of tests in separate files and folders for maintainability and separation of concerns. In general, I try to keep modules / files fitting on at most two pages in my editor so I can easily reason about individual areas of logic. It might be because I'm no longer a young'un who can keep everything in his head... so you know I want to talk next about refactoring chokidar itself :wink:
For backbone-orm, we had shared tests across all variants and shimmed the active options using mocha's `--require` option. Here's the structure: https://github.com/vidigami/backbone-orm/tree/master/test/spec/sync
I know this is a question of pragmatism and taste, but...
I guess the question here is: now that you are familiar with the various test cases and testing logic, how would you structure them for the long term?
This issue is mainly about agreeing on a desired structure so a volunteer can do the work ("Maybe you guys can lay out the desired structure and a community member can refactor?").
The goal is to make the tests well structured and easy to extend / maintain so that the learning curve for open source contributors is low enough that they can easily add their own tests or suites of tests for issues or when creating functionality pull requests.
Personally, I would split out all of the top level describes into their own files and/or folders (eg. globs, symlinks, unwatch, close, etc) plus probably further categorize / breakdown the glob tests and create some setup / teardown helpers that can be configured locally in each test file rather than sharing any global configuration.
Additional suggestions:
- use npm script shortcuts to run each of the 'watching' methods independently, eg. all, fsevents, fs.watch (polling), fs.watch (non-polling)
- create a perf folder and tests like https://github.com/kmalakoff/walk-filtered and https://github.com/lodash/lodash
- use mocha's grep and tags in the tests, or filesystem globs, to run smaller subsections of the tests, eg. symlinks, globs, etc (see the sketch below)
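A minimal sketch of the tag idea; the suite names and tags are made up, but `--grep` is standard mocha:

```js
// Tags are just substrings embedded in suite titles that --grep can match.
describe('symlinks @symlinks', function() {
  it('follows a symlinked dir', function() { /* ... */ });
});

describe('globs @globs', function() {
  it('emits add events for glob matches', function() { /* ... */ });
});
```

Running only the symlink suites would then be `mocha --grep @symlinks`.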
Being able to set the mode from a CLI switch is something I've wanted for a while, but mocha's grep does not play nicely with `.only`, so I haven't seen a way to easily use it to run one test in a specified mode.
Also, getting through both npm and istanbul to pass the argv correctly means it looks something like `npm test -- -- --grep fsevents`.
I started playing with something yesterday that might work out to be a good enough solution on this point. We'll see.
To answer a question you posed earlier:
> how would you structure them for the long term?
I don't have much of a problem with the current general structure, and as discussed there are some advantages that wouldn't be as easily accomplished if we followed your suggestions. Some of the points you've made about the virtues of these changes aren't really resonating with me so far.
I'm certainly open to changes, and rather than bikeshedding about any of the particulars, PRs welcome, imo. Sorry that I can't easily provide assurance ahead of time about whether I think any particular changes would be accepted when it comes to matters of personal style. Have to see the code to form an opinion regarding whether a big refactor increases clarity and maintainability or is just moving the sand around. It might just be a failure of imagination on my part.
@es128 understood. With my fresh eyes trying to understand and fix the tests, it is very clear in my mind that a simpler, more modular, and less coupled structure will be beneficial to contributors and maintainers - there's a lot of magic / coupled logic / tacit knowledge in there, and a lack of control over the running of the tests, for a potential contributor to casually wade into! (you might be too close to / too accustomed to it to appreciate that fully)
That said, if you don't see it, then there's not much for me to say!
In case it wasn't totally clear... I was thinking this could be a great task for a junior developer who wants to get involved in a cool open source project, so I'll leave this issue open, but I will not be submitting a PR myself.
> Being able to set the mode from a CLI switch is something I've wanted for a while, but mocha's grep does not play nicely with `.only` - so I haven't seen a way to easily use it to run one test in a specified mode.
`grep`, `only`, glob, and the `--require` option serve different purposes, so what you are saying makes a lot of sense... for example:
- `grep` would be used to run different groups of tests across files (in backbone-orm, we used it to select things like *quick* tests rather than exhaustive ones, and to isolate things related to relationships)
- `only` is for isolating a specific test, probably great for ad hoc fixing of a single test
- glob scopes by files, so you could name multiple files with `glob` to run all glob tests if different categories warrant their own files (eg. large, complex tests or custom setup / teardown)
- `require` can be used for dependency injection to implement things like the modes (eg. `fsevents`); see the sketch below
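A minimal sketch of the `--require` injection idea, assuming an invented file layout and global name (this is not how chokidar's tests currently work):

```js
// test/shims/fsevents.js, loaded with:
//   mocha --require ./test/shims/fsevents.js test/
// Exposes the options for the mode under test; the shared specs would
// then create watchers with chokidar.watch(dir, global.watcherOptions).
global.watcherOptions = { useFsEvents: true };
```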
Mocha provides a lot of great tools. The developer can choose the best tool for the job depending on what they are trying to do.
I think what this thread has helped bring to light, at least for me, is that the primary target for refactoring should be the way the mode switching works at https://github.com/paulmillr/chokidar/blob/b54580ad83ac3e53175d6c0997a3f025e555cadc/test.js#L100-L106, as that is what ultimately gets in the way of effectively using all of mocha's tools and of proceeding with most of the other ideas that have been discussed, although I'm not sure what the answer is.
Would be happy to see contributions that show a better way.
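To make the discussion concrete, here is one possible direction, sketched under assumptions: the `./suites/*` paths and the suite-as-a-function convention are invented for the example, and this is not the current test.js code:

```js
// Register the shared suites once per watching mode. Each suite module
// is assumed to export a function of chokidar options that registers
// its specs, e.g. module.exports = function(opts) { it(...); };
const modes = {
  fsevents: { useFsEvents: true },
  polling: { usePolling: true },
  'non-polling': { usePolling: false, useFsEvents: false }
};

const suites = [require('./suites/globs'), require('./suites/symlinks')];

Object.keys(modes).forEach(function(name) {
  describe(name, function() {
    suites.forEach(function(suite) { suite(modes[name]); });
  });
});
```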
@es128 I'd expand it a little...
- mode switching as that is what ultimately is getting in the way of effectively using all of mocha's tools
- adding a perf suite (separate issue?).
Yes, a perf suite would be nice, and is a separate issue imo. Once, long ago, I spent an hour trying to adapt gaze's benchmark code, but it didn't seem like it was working correctly and I abandoned it.
I think it'll be tough to get reliable benchmarks when triggering and detecting changes from within the same process.
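A sketch of what triggering out-of-process might look like; the fixture directory and timing approach are assumptions for illustration, not a working benchmark:

```js
const { execFile } = require('child_process');
const chokidar = require('chokidar');

// Assumes a bench-fixtures/ directory already exists.
const watcher = chokidar.watch('bench-fixtures', { ignoreInitial: true });
let start;

watcher.on('ready', function() {
  start = process.hrtime();
  // Write the file from a separate node process so work in this process
  // doesn't skew the measurement.
  execFile(process.execPath, [
    '-e', 'require("fs").writeFileSync("bench-fixtures/new.txt", "x")'
  ]);
});

watcher.on('add', function() {
  const diff = process.hrtime(start);
  console.log('add detected after ' + (diff[0] * 1e3 + diff[1] / 1e6).toFixed(1) + ' ms');
  watcher.close();
});
```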
Done: https://github.com/paulmillr/chokidar/issues/415
I've refactored the walk-filtered tests and implemented the fs-generate tests to follow best practices that promote open source contribution.
Since they demonstrate what I believe is a self-documenting and pretty maintainable testing structure that new contributors would not face a big learning curve to come up to speed on, I thought I'd share...
Principles / best practices
(1) Each group of tests is in its own file
- benefit: it makes it easy to know where to look for tests, where to put new tests, and whether a new category / suite of tests needs to be created
(2) fixtures are controlled by each suite or test itself, depending on the customization requirements of the suite or individual tests (see the sketch after this list)
- benefit: it makes it easy to create slightly different test scenarios. Take a look here for an example of why this might be useful.
- benefit: by decoupling the tests and making fixture generation DRY, it becomes easier to change the testing data for specific test needs. If fixtures are tightly coupled, a developer may choose not to write tests, or to write a lot of custom fixture code / flow, because it is too risky to modify the globally-shared fixtures.
(3) tests are in a folder called test
- benefit: it meets developer expectations because it is pretty standard these days to separate library code from test and perf code
(4) there are shared testing utilities. In this case, to spy on the types of stats, and the generic functionality was released in an npm module, fs-generate.
- benefit: following a shared testing-utils pattern helps keep code DRY, reducing errors, and increases contributor productivity since developers can reuse their code
(5) npm commands to run the tests
- benefit: you don't have to remember library-specific testing commands and arguments, so running tests becomes more standard across libraries; the details are hidden (eg. we use tape, another library uses mocha and istanbul) and developers intuitively know where to look and what to do
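A sketch of principle (2), with the suite name and fixture contents invented for the example; kmalakoff's fs-generate module plays a similar role to the inline fs calls here:

```js
const fs = require('fs');
const os = require('os');
const path = require('path');

describe('unwatch', function() {
  let dir;

  // Each test gets its own throwaway fixture dir, so changing one
  // test's data can't break another test.
  beforeEach(function() {
    dir = fs.mkdtempSync(path.join(os.tmpdir(), 'chokidar-test-'));
    fs.writeFileSync(path.join(dir, 'change.txt'), 'initial');
  });

  afterEach(function() {
    fs.rmSync(dir, { recursive: true, force: true });
  });

  it('stops emitting events for unwatched paths', function() { /* ... */ });
});
```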
Anyways, you can do what you want with chokidar, but I'd definitely file this under best practices for promoting open source contributions rather than bike-shedding (some of the actual implementation decisions could definitely be bike-shedded, but I believe the principles / best practices less so).
Hi, greetings Sir, I just wanted to know how you got the idea to name it 'chokidar'?
@abhi0661236 readme