
Rethink testing strategy

Open ph-fritsche opened this issue 1 year ago • 19 comments

Problem description

  1. We run linting and tests on multiple node versions that we support according to https://github.com/testing-library/user-event/blob/ea5023141740123aede05561913cb2ec14952e2b/package.json#L13-L14 This has no merit. Our source code has no dependency on any NodeJS API, so the result for the source itself will always be the same. The tested code is not the distributed code, so if the different NodeJS versions yield different results, this only flags problems with our testing environment, but doesn't indicate anything about the build we would distribute from that commit.

  2. We don't test our build. Misconfigurations or bugs in our build tools result in broken builds being published and any potential fix is verified only by manually testing the build.

  3. We only test on Jsdom. We try to be platform agnostic, but our automation doesn't help us develop software that works as intended in different environments, and the differences between Jsdom and e.g. Chrome are too numerous and too wide to rely on manual testing alone.

Suggested solution

  • [x] Remove source tests on older node versions.
  • [x] Refactor the tests so that they don't rely on Jest and could be run in different environments.
  • [x] Run the tests in a headless browser during CI.
  • [ ] Split up our tests in a) those that rely on mocks or test internals and b) those that only interact with the DOM and can be observed using only the main export.
  • [ ] Add at least a smoke test on different node versions using the build.

Additional context

No response

ph-fritsche · Aug 03 '22 11:08

/cc @nickmccurdy @timdeschryver @MatanBobi

ph-fritsche · Aug 03 '22 11:08

I was thinking of getting some Testing Library projects running on test frameworks other than Jest. Vitest would probably be a good one to focus on, as it's fast, easy to configure, mostly compatible with Jest, and supports enabling/disabling global injection.

nickmccurdy · Aug 03 '22 23:08

Can you leverage Vitest to run tests in a headless browser?

ph-fritsche · Aug 04 '22 07:08

@ph-fritsche great initiative, I fully agree with all of the points. One small thing about 3: I agree that the differences between JSDOM and Chrome are enormous (don't forget about happy-dom too), but our code is based on the standards and not on a specific engine implementation. If we also run tests using happy-dom, for example, some of our tests won't work because happy-dom is missing some implementations. So what's the suggestion here? Should we run all our tests on a headless browser too? Running our tests on JSDOM is nice because a big part of our users use JSDOM to run our library.

MatanBobi · Aug 17 '22 09:08

I don't know if we should run (some?) tests in happy-dom too, but I think we need to run our tests in a headless browser. In the end, the browser has been our point of reference all along.

Some things are hard to test in a browser and also don't need to be tested twice - like the tests on correctly wiring the APIs through .setup().

There might be exceptions due to limited implementation in the environment, but in general a test using user-event should at least work both in Jsdom and headless Chrome - so our tests should run in those too.

ph-fritsche · Aug 17 '22 10:08

A little update here: I'm trying to make this work without rewriting too much of the tests.

The current approach is Karma + Jasmine with Jest's expect, spyOn and fn, so that any tests without mocking can stay the same; this also makes it more likely that the test environment can be reused in other repos. The Jasmine types are a mess, though, and I'm not fond of maintaining those types by hand myself, even if they're unlikely to change. So maybe this isn't practical.

Providing the transpiled files is already resolved. Letting Karma manage the files and adding a preprocessor was slow and inflexible. I wrote a little tool using chokidar and rollup (with @swc/core as the transpiler) that makes it easy to provide the necessary transpiled modules. It supports modules with TS paths and modules with Node dependencies (replacing those with stubs). A Karma plugin can then update the file list when files change and serve them from memory. As that's too much code to live in this library, I started a repo for it at https://github.com/ph-fritsche/toolbox

ph-fritsche · Aug 31 '22 08:08

That sounds like an interesting approach. Is your goal to have a more minimal test environment, or specifically ensure Karma and Jasmine are supported? If the former, I think it could be easier to use Vitest, optionally with globals disabled.

nickmccurdy · Aug 31 '22 10:08

The goal is to run tests in at least one major browser (preferably Chrome) and in Jsdom. These tests should be run from and reported to the CLI so that you can just use e.g. the integrated terminal in VS Code.

Inspecting a test at a breakpoint in the browser console or stepping through it with a debugger would be nice but isn't strictly necessary. I'd like to provide the tests in a manner that lets someone just build them, add an adapter, and run them against their environment to check for differences in the DOM implementation, but this also isn't strictly necessary.

Neither Karma nor Jasmine is a requirement. The more code I read, the more I think that by the time I've understood the necessary configurations and plugin hooks, I could have written the runner myself.

ph-fritsche · Aug 31 '22 10:08

I've had success with using Karma for that sort of thing in the past, but maybe a more modern alternative like Cypress or Playwright would be easier to set up now.

nickmccurdy · Aug 31 '22 10:08

Karma looks great, as it is built on a plugin system. But there are no types, and the dependency injection makes it really hard to identify which interfaces are being used and which way of changing some detail would be the "correct" one without causing undocumented side effects. E.g. the best solution for adding files without breaking the interaction with other plugins seemed to be monkey-patching the files getter on the fileList implementation :facepalm:

If I understand their documentation correctly, Vitest runs in Node with Jsdom or Happy-dom. Cypress and Playwright only run in the browser and would again couple our tests to a specific test environment.

ph-fritsche · Aug 31 '22 11:08

I'm aware, but maybe we could write an abstraction layer that could run tests in something like Vitest and something like Cypress.

nickmccurdy · Aug 31 '22 11:08

> Neither Karma nor Jasmine is a requirement. The more code I read, the more I think that by the time I've understood the necessary configurations and plugin hooks, I could have written the runner myself.

Little update here: I've dug through a lot of code, tried a few things, and reached a dead end every time. There is a multitude of specific problems, but I think it boils down to this: every test framework has a strong paradigm for how the test code ends up in the process that executes it. Both sending the code to the process with the target environment (Node or a browser, but not both) and executing the test code are part of the respective test runner, and they aren't compatible.

So I've written the tools to run tests in Node and Chromium, and it seems to work. There is still some work left to do. If someone wants to help, I'll accelerate documenting it; otherwise I'll try to resolve some remaining questions and then open a PR to adjust our tests and add that test environment to our CI.

ph-fritsche · Nov 06 '22 18:11

@ph-fritsche Is there something still open here? :) Need a hand?

MatanBobi · Mar 28 '23 12:03

@MatanBobi Yes, your help is much appreciated. We need to complete #1091 by fixing, or at least explaining, the different results when running our tests in Chrome. If we can fix them, we can let the CI step fail on errors and make sure that other PRs don't cause regressions in compatibility with in-browser tests.

ph-fritsche · Mar 29 '23 08:03

@ph-fritsche It looks like the validate step there succeeded, and the logs aren't retained anymore because the run is probably too old. I can't find a way to re-run that action, though; am I missing something?

MatanBobi · Apr 09 '23 05:04

@MatanBobi (After fixing the linting errors triggered by updated deps,) here's a new report.

ph-fritsche · Apr 09 '23 10:04

I am late to the party, but is there something I can do @ph-fritsche?

Christian24 · Sep 15 '23 20:09

Hey! I'd love to help with this as well. I set up a very rudimentary example where we run the tests against a happy-dom environment (https://github.com/artursvonda/user-event/pull/1), but I'd love some help on how to set this up properly. Initially, we'll probably need to set it up in a way that allows these tests to fail on happy-dom, as there's work to be done on both sides of user-event/happy-dom until we get to green. I've already opened a ticket on happy-dom for XPath: https://github.com/capricorn86/happy-dom/issues/1125

Let me know if this should be opened as a separate issue.
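For reference, pointing Vitest at happy-dom is a one-line config switch. A sketch, assuming the tests themselves are already runner-agnostic:

```javascript
// vitest.config.js
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // 'jsdom' is the other candidate; switching here lets the same
    // suite run against either environment.
    environment: 'happy-dom',
  },
})
```

Letting the happy-dom run report failures without failing CI (e.g. via `continue-on-error` in the workflow) would match the "allowed to fail" phase described above.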

artursvonda · Oct 26 '23 08:10

Not sure if this helps, but I just stumbled upon this: https://github.com/material-components/material-web/blob/main/testing/harness.ts It seems Google has invested a good bit of effort into simulating clicks and the like. I'd imagine the one for Angular Material is probably even more extensive.

Christian24 · Jan 18 '24 21:01