# E2E tests overhaul
This PR suggests a new approach to our e2e tests for the Hydrogen framework and showcases a few basic test suites. This is a proof of concept; I want feedback before devoting too much effort to it. While the broad characteristics of this overhaul are defined here and outlined below, more methods and functions will be added as we move all of the e2e tests over.
## Dynamic fixtures
Each app runs in an isolated sandbox that is created before and removed after each test run. This differs from our current setup, where all fixtures are checked into git and live in the `packages/playground` directory.
| Before: Static fixtures | After: Dynamic fixtures |
|---|---|
| _(screenshot)_ | _(screenshot)_ |
Our current setup has been problematic because:
- The `packages` folder should only contain actual packages from our monorepo, and the additional files add unnecessary bloat. More files tend to lead to more confusion when trying to understand a single e2e test. Dynamic fixtures solve this because the files will not exist in the repo at all unless a test is running or the `{persist: true}` config is given (this is for debugging only).
- It is almost impossible to know instinctively which e2e tests correspond to which fixture. Dynamic fixtures solve this because only the relevant fixtures are generated, in the same folder as the tests that use them.
Generating the fixtures dynamically also follows the pattern of other frameworks (Next.js, Remix, Astro, etc.).
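The sandbox lifecycle described above could be sketched roughly as follows. This is an illustrative sketch, not the actual PR implementation; all names (`createSandbox`, `SandboxOptions`) are hypothetical:

```typescript
import {mkdtempSync, writeFileSync, rmSync, mkdirSync} from 'node:fs';
import {tmpdir} from 'node:os';
import {join, dirname} from 'node:path';

interface SandboxOptions {
  files: Record<string, string>; // relative path -> file contents
  persist?: boolean;             // keep the directory around for debugging
}

// Create an isolated fixture directory before a test, and remove it
// afterwards unless {persist: true} is given.
function createSandbox({files, persist = false}: SandboxOptions) {
  const root = mkdtempSync(join(tmpdir(), 'hydrogen-e2e-'));
  for (const [relPath, contents] of Object.entries(files)) {
    const absPath = join(root, relPath);
    mkdirSync(dirname(absPath), {recursive: true});
    writeFileSync(absPath, contents);
  }
  return {
    root,
    cleanup() {
      if (!persist) rmSync(root, {recursive: true, force: true});
    },
  };
}
```

A suite would call `createSandbox` in its setup phase and `cleanup` in teardown, so nothing fixture-related ever lives in `packages/`.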
## In-source testing for routes and components
Extending the above point even further: many of our e2e tests are route- or component-specific, and now we can define tests directly beneath the code they are testing. This differs from our current setup, where all the tests live in a single suite for each fixture, against all the routes defined within that fixture's routes folder.
Our current setup has been problematic because:
- It is difficult to understand what code an individual test refers to (there are dozens of routes). Defining a test in the same file as the source code makes this a no-brainer.
Additionally, the source and the test can even share variables. This also follows a trend in Remix's Route Module API: where a route-level component has multiple exports (`meta`, `links`, etc.), a tests export falls right in line with this way of collocating everything in a top-level route file.
## Contained setup and teardown
We export our own `describe` function that defines each suite and takes care of the nitty-gritty setup/teardown of each test. This differs from our current setup, where we reference at least 4 different files for building code, starting servers, and other tasks required for each e2e test suite to run properly.
The below files are not easy to reason about (this has come up in conversations with many on the team). The changes in this PR would remove all of them.
Our current setup has been problematic because:
- It is very difficult to follow how the full environment is constructed. By keeping this logic contained to a single function block that sets up the different primitives and injects them into the suite we can more easily understand and debug the full e2e environment.
The one drawback to this approach is that we are essentially "wrapping" Vitest primitives, and it may become unclear where that line is drawn. I might suggest we go all in on this approach and essentially re-export `*` from `vitest` inside of our test framework so that a developer doesn't need to think about it. We could consider a `hydrogen/testing` package that sets this all up, so a user test might look like:
```ts
import {describe, it, vi, beforeEach} from '@shopify/hydrogen-testing';

describe('...', () => {
  // ...
});
```
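The wrapping pattern itself might look something like this generic sketch. It is not Hydrogen's actual implementation (which would use the runner's `beforeAll`/`afterAll` hooks rather than a synchronous wrapper); `makeDescribe` and its parameters are illustrative:

```typescript
// A minimal sketch of wrapping a test runner's `describe` with contained
// setup/teardown. `RunnerDescribe` stands in for vitest's `describe`.
type SuiteContext = {url: string};
type SuiteFn = (ctx: SuiteContext) => void;
type RunnerDescribe = (name: string, fn: () => void) => void;

function makeDescribe(
  runnerDescribe: RunnerDescribe,
  setup: () => SuiteContext,
  teardown: (ctx: SuiteContext) => void,
) {
  return function describe(name: string, suite: SuiteFn) {
    runnerDescribe(name, () => {
      const ctx = setup();  // build the app, start a server, etc.
      try {
        suite(ctx);         // inject the primitives into the suite
      } finally {
        teardown(ctx);      // stop the server, remove the sandbox
      }
    });
  };
}
```

The point is that all environment construction lives in one function block, so debugging the full e2e environment means reading one file instead of four.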
## Support for testing in multiple environments
Each suite runs in 3 environments: Node development using a Vite dev server, Node production using our platform entry, and Worker production using our platform entry and the worker runtime. We can opt out of any of these at the suite level. This differs from our current setup, where we pass a config into each test and use conditional blocks to return early if we don't want a test to run in a specific environment.
Our current setup has been problematic because:
- Having control flow inside of a test is a bad practice.
- Passing a config into a separate shared file of test cases is indirect and makes tests harder to follow.
The above conditional logic can be omitted if we are able to declaratively tell a test suite how and where to run.
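Declarative opt-out could be as simple as filtering the list of environments against the suite's config; a rough sketch under assumed names (`environmentsFor`, `SuiteConfig` are hypothetical):

```typescript
type Environment = 'node-dev' | 'node-prod' | 'worker-prod';

const ALL_ENVIRONMENTS: Environment[] = ['node-dev', 'node-prod', 'worker-prod'];

interface SuiteConfig {
  // Environments the suite opts out of; defaults to none.
  exclude?: Environment[];
}

// Compute which environments a suite runs in, instead of branching
// inside individual tests.
function environmentsFor({exclude = []}: SuiteConfig): Environment[] {
  return ALL_ENVIRONMENTS.filter((env) => !exclude.includes(env));
}
```

For example, `environmentsFor({exclude: ['worker-prod']})` yields `['node-dev', 'node-prod']`, and the framework would run the suite once per entry.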
## Vitest
We have been moving away from Jest entirely in our repo, in favour of Vitest. Not much to add on this point, because it is a hard requirement for Vite 3, among other reasons (ESM support, etc.).
cc @pepicrft @lemonmade @heimidal
## Running these new tests locally
Check out this PR and run `yarn test-e2e-new`.
Thanks for all the great questions @frandiox, some answers below:
> It looks like this initializes the SandboxInstance once per describe and each environment, and the initialization is basically building or creating a dev server. I guess this means we need to be cautious when adding new describes because it can change the performance of the tests, and we should rather use it directly as much as possible. Is this correct?
Yes, that's correct. There will be a balance between sharing a test suite (for performance) and establishing a new one (for isolation). I don't have an answer to this yet; it will have to come out in practice. But it's important to remember that e2e testing is unlike unit testing in this way.
> Related to the point above, what would be the "recommended" way to organize tests now (maximizing perf, I guess)? At first glance, it's not clear to me because now we have much more flexibility than before. For example, do we have a project test/css that has 1 describe and includes tests for everything related to CSS, including Pure CSS and CSS Modules? Should those two be split in different describes or in different projects?
Yes, this is a great observation. I've considered a few options to make this easy (e.g. new Hydrogen config === new test suite), but haven't come up with a clear answer just yet.
> When do we write tests directly in test files instead of routes? When the tests are related to more than 1 single route? Any other use-case?
I think it would be great if we did as much as possible within the route files, since that is a nice API, and I am trying to come up with a way to support this across the board (inclusive of multiple routes). Again, I don't have a clear answer yet, but it is an improvement I would like to make to this whole framework design.
> Do you think that it will be possible at some point to run each describe/environment in a different process?
I do think this should be possible, and it would be a big performance win.
> @cartogram Would it make sense to optionally pass the hydrogen.config.js path to describe? Or perhaps we should just have different projects when testing things like "async config"?
Personally, I think that given this model, things like an async config would be a different fixture.