# test: add initial watch mode test suite
## Summary
Creates a test harness/structure for watch mode tests and adds an initial test suite to it
- Built on top of #371 and #384
- Follow-up to https://github.com/ezolenko/rollup-plugin-typescript2/pull/371#issuecomment-1174339120
## Details
- test starting watch mode, changing a file, and adding a semantic error
  - put this in a separate file as it has its own complexity to deal with (see below)
  - initially put it inside `no-errors`, but it got pretty confusing; this structure is a good bit simpler -- refactored this a couple times actually
- add two watch mode helpers (see the sketch after this list)
  - `watchBundle` is very similar to `genBundle`, but with some watch mode nuances (like returning a watcher)
  - `watchEnd` is a bit of a complicated async wrapper around a listener interface to make imperative testing simpler
- refactor: abstract out `createInput` and `createOutput` to be used in both `genBundle` and `watchBundle`
- refactor: make sure `dist` output gets put into a temp test dir
  - the previous config made it output into rpt2's `dist` dir, since `cwd` is the project root (when running tests from the project root)
  - the Rollup API's `watch` func always writes out; can't just test in-memory like with `bundle.generate`
    - so the `dist` dir becomes relevant as such
- refactor: pass in a temp `testDir` instead of the `cacheRoot`
  - we can derive the `cacheRoot` and the `dist` output from `testDir`
  - also improves test clean-up by removing `testDir` at the end, not just the `cacheRoot` dir inside it
  - `testDir` is the same var used in the unit tests to define a temp dir for testing
- change the `no-errors` fixture a tiny bit so that changing the import causes it to change too
- this ended up being fairly complex due to having to handle lots of async and parallelism
  - parallel testing means fixtures have to be immutable, but watch mode needs to modify files
    - ended up copying fixtures to a temp dir where I could modify them
  - async events are all over here
    - had to convert a callback listener interface to async too, which was pretty confusing
    - and make sure the listener and bundles got properly closed too, so no leaky tests
    - apparently `expect.rejects.toThrow` can return a Promise too, so I missed this error for a while
      - refactor: make sure the error spec awaits too (though I think the errors happen to throw synchronously there)
  - also found a number of bugs while attempting to test cache invalidation within watch mode
    - thought it was a natural place to test since watch mode testing needs to modify files anyway
    - had to trace a bunch to figure out why some code paths weren't being covered -- see the section below
  - testing Rollup watch within Jest watch also causes Jest to re-run a few times
    - I imagine double, overlapping watchers are confusing each other
    - might need to disable these tests when using Jest watch
  - the highly complex async + parallel flows make this pretty confusing to debug, so this code needs to be "handled with care"
    - this high complexity was even harder to debug when I'm having migraines (which is most of the time, unfortunately)
      - hence why it took me a bit to finally make a PR for this small amount of code; the code was ok, the debugging was not too fun
      - that and I just didn't have time to make the PR for a bit / investigate the bugs below
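To make the helpers concrete, here's a minimal sketch of the idea behind them (a sketch only -- the exact signatures in the actual harness differ): `watchBundle` wraps Rollup's `watch` API, and `watchEnd` converts the watcher's callback listener interface into a Promise so tests can simply `await` each watch cycle.

```ts
// minimal sketch, not the exact harness code: wrap Rollup's watch() and
// convert its event-listener interface into an awaitable Promise
import { watch, RollupWatcher, RollupWatcherEvent, RollupWatchOptions } from "rollup";

export function watchBundle(options: RollupWatchOptions): RollupWatcher {
  // unlike bundle.generate(), watch() always writes to disk, which is why
  // the `dist` output had to be redirected into a temp test dir
  return watch(options);
}

export function watchEnd(watcher: RollupWatcher): Promise<void> {
  return new Promise((resolve, reject) => {
    const listener = (event: RollupWatcherEvent) => {
      if (event.code === "END") {
        watcher.off("event", listener); // clean up so tests don't leak
        resolve();
      } else if (event.code === "ERROR") {
        watcher.off("event", listener);
        reject(event.error);
      }
    };
    watcher.on("event", listener);
  });
}
```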
## Found several bugs
Per above, in the process of writing this, I discovered lots of bugs with watch mode itself, as well as with cache invalidation during watch mode. This took some tracing to figure out, so see this annotated trace log I manually created:

Trace log:

```
❯ npm run test:coverage -- watch
> [email protected] test:coverage
> jest --coverage "watch"
console.log
options:
at Object.log (src/index.ts:95:12)
\\\\\ this is only called once! /////
console.log
tscache constructor:
at new log (src/tscache.ts:116:11)
console.log
cache.match:
at RollingCache.log [as match] (src/rollingcache.ts:45:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/index.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/index.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/index.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/some-import.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/some-import.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/some-import.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
buildEnd:
at Object.log (src/index.ts:257:12)
at async Promise.all (index 0)
\\\\\ walkTree is called, even though this is the first run, so there is no need to call it again /////
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/index.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
\\\\\ dirty + ambientTypesDirty but this is the first run -- these were just written /////
console.log
isDirty:
checkImports: true
ambientTypesDirty: true
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/index.ts
label: { dirty: true }
at TsCache.log [as isDirty] (src/tscache.ts:299:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/index.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
isDirty:
checkImports: true
ambientTypesDirty: true
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/index.ts
label: { dirty: true }
at TsCache.log [as isDirty] (src/tscache.ts:299:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/some-import.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
isDirty:
checkImports: true
ambientTypesDirty: true
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/some-import.ts
label: { dirty: true }
at TsCache.log [as isDirty] (src/tscache.ts:299:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/some-import.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
isDirty:
checkImports: true
ambientTypesDirty: true
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/some-import.ts
label: { dirty: true }
at TsCache.log [as isDirty] (src/tscache.ts:299:11)
\\\\\ options called again, but tscache's constructor wasn't! this also means that all caches were rolled and not reset! /////
console.log
options:
at Object.log (src/index.ts:95:12)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/index.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/index.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/index.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/some-import.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/some-import.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/some-import.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
buildEnd:
at Object.log (src/index.ts:257:12)
at async Promise.all (index 0)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/index.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/index.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/some-import.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
console.log
cache:
id: /rollup-plugin-typescript2/__tests__/integration/__temp/watch/fixtures/no-errors/some-import.ts
at TsCache.log [as getCached] (src/tscache.ts:255:11)
PASS __tests__/integration/watch.spec.ts (6.672 s)
✓ integration - watch (2931 ms)
----------------------------|---------|----------|---------|---------|------------------------------------------------------------------------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------------------------|---------|----------|---------|---------|------------------------------------------------------------------------------------
All files | 71.52 | 45.09 | 67.79 | 73.26 |
check-tsconfig.ts | 80 | 25 | 100 | 80 | 10
context.ts | 61.9 | 26.66 | 66.66 | 60 | 28-37,44,51
diagnostics-format-host.ts | 75 | 100 | 33.33 | 85.71 | 10
get-options-overrides.ts | 57.89 | 37.5 | 22.22 | 66.66 | 36,43-51,61-62,67-68
host.ts | 77.19 | 69.23 | 91.66 | 74 | 21-22,52,70-87
index.ts | 79.48 | 55.26 | 58.33 | 86.11 | 62,114,117,135,150,162,179,203-207,218,240-241,262-266,284,293,302,320,324,330-332
parse-tsconfig.ts | 88.88 | 64.28 | 100 | 88.88 | 18,28,35-36
print-diagnostics.ts | 17.39 | 0 | 50 | 17.39 | 14-43
rollingcache.ts | 79.48 | 60 | 87.5 | 79.48 | 47-48,54-63,72,80
rollupcontext.ts | 28 | 0 | 20 | 18.18 | 14-49
tscache.ts | 71.72 | 45 | 85.29 | 70.14 | 66,82-96,122-123,154-177,194-195,201,253,261-270,302,305,310-326
tslib.ts | 80 | 100 | 100 | 80 | 18-19
tsproxy.ts | 100 | 100 | 100 | 100 |
----------------------------|---------|----------|---------|---------|------------------------------------------------------------------------------------
Test Suites: 1 passed, 1 total
Tests: 1 passed, 1 total
Snapshots: 0 total
Time: 6.89 s, estimated 7 s
Ran all test suites matching /watch/i.
```
Per the annotations, there are a few bugs:
1. `walkTree` seems entirely unnecessary: it duplicates the existing type-checks
   - I started writing a test to show this as well; if `walkTree` uses `RollupContext` instead of `ConsoleContext`, we get duplicate warnings in the tests when using `abortOnError: false`.
2. The `tscache` constructor is only called once, since it's created on plugin init and not in the `options` hook
   - This means it's never reset in watch mode:
     - once it's rolled after the first build, it's never reset, so the cache is effectively only used for the first build on watch
       - using the `closeWatcher` hook to roll caches in watch mode instead of during `buildEnd` may be a workaround for this (see the sketch after this list)
     - ambient types are never re-checked as well
   - Note that this problem existed even before my cache refactors
     - those didn't change any actual functionality after all
     - and I double-checked after this as well: `init` was `private` before, and `clean` was only called when `pluginOptions.clean` was set
3. Similar to the above, there may be problems (and optimizations to be made) since not all plugin vars are reset during the `options` hook in watch mode, nor during the `watchChange` hook
   - a possible optimization related to #177 would be to re-use the TS `DocumentRegistry` between watch cycles instead of resetting it in the `options` hook
   - thought about `noErrors` not being reset, but that's only used when not in watch mode, so maybe there aren't as many issues?
4. Despite having a test here that checks for cache invalidation on `checkImports`, the coverage report shows that the imports are never checked in watch mode (at least)
   - Per the trace annotations, this seems to be because on the very first write, nodes and ambient types are marked as `dirty`. Since, in the current buggy watch mode, these are never reset (per above), they will always be `dirty` if starting from an empty / clean cache.
     - Need to test starting from a non-empty cache in both watch and non-watch; hopefully the problem is only when starting fresh, which would mean fixing the lack of reset during watch mode above may fix this too. If not, the cache may be dirty the vast majority of the time; that, coupled with the cache rolling after only 1 run, may make it quite inefficient (and it's already more FS-intensive than `noCache` mode, with I/O / FS being a general bottleneck to watch out for).
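As a rough illustration of the `closeWatcher` workaround mentioned in bug 2 (a hedged sketch, not a tested fix; `rollCaches` is a stand-in for rpt2's internal cache-rolling logic, not a real export):

```ts
// hedged sketch of the closeWatcher workaround from bug 2 above
import type { Plugin } from "rollup";

declare function rollCaches(): void; // assumed internal helper

const sketch: Plugin = {
  name: "rpt2-close-watcher-sketch",
  buildEnd() {
    // non-watch builds: keep rolling once per build, as today
    if (!this.meta.watchMode) rollCaches();
  },
  closeWatcher() {
    // watch mode: roll only once, when the whole watch session closes,
    // so later watch cycles can still hit (and reset) the cache
    rollCaches();
  },
};
```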
Related: since I manually wrote debug log traces to track these down, it might be good to straight-up add tracing / output a trace at debug verbosity, whether via a library, runtime-native, or otherwise. This could potentially obviate the need for debug logging as well and be a very powerful debugging / telemetry tool.
Failing Windows tests seem to be due to #385, not the tests here.
- I think walkTree in watch mode is supposed to print out all existing errors, not only errors from the last watch round. For example, if there are 2 errors in 2 unrelated files and the user fixes one of them, Rollup would send that file to transform and that error would go away. But we still want to print out the second error (otherwise the user would think all errors are gone)
- Dirty cache in watch is probably ok -- Rollup would only send changed files for transform. walkTree over all other files should still use the cached errors for the files that are untouched in this round.
- Yeah, DocumentRegistry is probably a perfect thing for watch mode; I think it is designed for IDEs that keep retranspiling a subset of files when users edit them

Tracing instrumentation is good, maybe add a new verbosity level if you plan to add massive amounts of output
Rebased with `master` and fixed the merge conflict -- all tests pass now w/ 91% coverage!
Will respond to follow-ups in a separate comment
> I think walkTree in watch mode is supposed to print out all existing errors

Yea, so I think the problem is that `walkTree` doesn't filter already checked files, and, as such, it duplicates any type-check errors that occurred during `transform`.
As an example, if `index.ts` changes, it'll go through `transform`, get type-checked there, and then get type-checked again in `walkTree`. That duplication is visible in the trace log. It's also a de-optimization, since type-checking gets duplicated.
Since #345 implemented the `checkedFiles` Set, we could use that to filter, but then `walkTree` would basically only type-check type-only files that were changed in `watchChange`... which isn't particularly useful, as #345 serves the same purpose (although it's implemented differently; and neither covers imports of type-only imports, since references of type-only files aren't added to the tree/graph).
I suppose we could create another var, `checkedFilesCycle`, that gets reset during `options` or something, so that it would only contain files that went through `transform` in the current cycle.
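To make that concrete, a hypothetical sketch (all of these names are invented for illustration; none exist in rpt2 today):

```ts
// hypothetical sketch of the checkedFilesCycle idea: a per-cycle Set so
// walkTree can skip files already type-checked by transform() this cycle
let checkedFilesCycle = new Set<string>();

// the options hook runs at the start of each watch cycle, so reset there
function onOptionsHook(): void {
  checkedFilesCycle = new Set<string>();
}

// transform() type-checks every file Rollup sends through it
function onTransformHook(id: string): void {
  checkedFilesCycle.add(id);
}

// walkTree would then only type-check what transform() didn't cover
function shouldCheckInWalkTree(id: string): boolean {
  return !checkedFilesCycle.has(id);
}
```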
Really what we want is to check all files that are `dirty` though, esp. after the fixes in #369, as we now know better when types and compiled JS code can change based on import changes. Right now, I believe we'll miss declarations for some files during watch too -- it's almost surprising that more watch mode bugs haven't been filed (guess they're hard to notice).
The `shouldTransformCachedModule` hook is basically what this is for. It was only added in Rollup 2.64.0 though, so it's a relatively new feature.
We could basically return `true` if the node `isDirty` (we may want to expose that function in a limited way; it's currently `private`). That would make things a bit slower, same as #369, but more correct.
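A hedged sketch of that idea (`isNodeDirty` is a hypothetical, limited public wrapper around `TsCache`'s currently `private` `isDirty`, not an existing export):

```ts
// hedged sketch: ask Rollup to re-transform (and hence re-type-check)
// any cached module whose cache node is dirty
import type { Plugin } from "rollup";

declare function isNodeDirty(id: string): boolean; // assumed wrapper

const sketch: Plugin = {
  name: "rpt2-dirty-check-sketch",
  shouldTransformCachedModule({ id }) {
    // returning true makes Rollup call transform() again for this module,
    // which is a bit slower (like #369) but more correct
    return isNodeDirty(id);
  },
};
```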
Aside: I did also look around and only found two Acorn plugins for TS; giving Rollup better knowledge of the TS AST could potentially simplify the integration, particularly when it comes to module graph issues like this, but there are some hurdles to that approach too 😕
> Dirty cache in watch is probably ok -- Rollup would only send changed files for transform. walkTree over all other files should still use the cached errors

Point 4 above dives into this more, but on a fresh cache, literally every watch cycle is considered `dirty`, since the initial state of nodes and `ambientTypesDirty` is never reset.
Upon further thought, I believe this goes beyond just a fresh cache: if nodes are marked as `dirty` in any watch cycle, they'll stay dirty for every subsequent watch cycle, because they're never reset or otherwise marked as `dirty: false` once `tscache` is initialized.

So while this bug seems more limited in scope than I thought when I first discovered it, I'm fairly certain it impacts fresh caches (for watch and non-watch) and the watch mode cache. Will need to trace a bit more to confirm though.
> Yeah, DocumentRegistry is probably a perfect thing for watch mode, I think it is designed for IDEs that keep retranspiling a subset of files when users edit them

Yea, I thought about using it when reading #177, then I realized it's already in the codebase, so was like "nvm". And then here I realized it's reset during watch mode, meaning that it's basically not used in the scenario where it would probably be most useful.
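For reference, a minimal sketch of the re-use idea using the actual TS APIs (the `createService` wrapper is invented for illustration):

```ts
// minimal sketch: create one DocumentRegistry at plugin init and share it
// across watch cycles instead of recreating state in the options hook
import * as ts from "typescript";

const registry = ts.createDocumentRegistry(); // created once, survives cycles

function createService(host: ts.LanguageServiceHost): ts.LanguageService {
  // a shared registry lets TS re-use parsed SourceFiles across
  // LanguageService instances -- exactly the repeated-rebuild watch case
  return ts.createLanguageService(host, registry);
}
```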
> Tracing instrumentation is good, maybe add a new verbosity level if you plan to add massive amounts of output

Was gonna add a `trace` level (which is a commonly used name) if needed, but it might also just replace `debug`.
On initial investigation, this seems to be relatively complex: most tracing libraries are built for the request/response lifecycle and specific frameworks, so custom instrumentation might be required for CLI(-ish) tools like this one, which is gonna require more effort/complexity (and unknowns) to set up 😕
At that point, I also thought about `babel-plugin-istanbul`, which similarly does instrumentation, though for a different purpose: code coverage. That would require something like two different builds (i.e. a debug/trace one) and adding Babel to the build pipeline, so that's non-trivial either 😕
If possible, shimming something like `c8` in (or otherwise getting Node native coverage working) could be the simplest thing.
So TBD on tracing; it's more work than I first anticipated. Apparently, setting up tracing in micro-services is easier to do out-of-the-box (I work as a k8s + Istio SME in my day job) than it is for a CLI(-ish) library 😅
@ezolenko not sure if changing your GitHub notification settings may have broken something, so figured I'd send a friendly ping, as I've got this PR and a dozen+ others waiting now (and then another handful built on top of those, left as "Drafts" currently 😅).
No worries if you just haven't had time to review these! Just thought it might be due to the notification settings changes (as that was recent) instead of time, so sending a ping just in case.
@agilgur5 it was both, I killed all emails and was offline for 2 weeks :) I'll review the PRs in the next few days...
@ezolenko have you had any time for reviews recently? There are still a dozen+ PRs in the queue and I have more on top of those as well (built on top of some of the current drafts, etc.) 😕
A patch release has also been needed for a bit now, as some fixes have been sitting unreleased for ~2 months and we've had at least two users request one. I was planning to PR a `dist/` build once the queue was merged (which has taken longer than I originally anticipated) 😅