Attempt at speeding up prisma-generator tests
While working on #1458, I noticed that some tests took quite a while to execute. Much of that time seems to be spent on npm installs, so I picked the prisma generator tests to experiment on.
As a baseline, the current dev tests for that file took around 400 seconds on my system, with the npm installs taking between 20 and 40 seconds.
With the `--no-progress --no-audit --no-fund` flags, the total time for this file dropped to around 300 seconds on my system.
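For reference, a minimal sketch of what passing those flags from a test could look like; the project path is illustrative, not taken from the actual test code:

```ts
import { execSync } from 'node:child_process';

const projectDir = '/tmp/test-project'; // hypothetical per-test project dir

// Skip progress output, the audit step, and funding notices during install.
execSync('npm install --no-progress --no-audit --no-fund', {
    cwd: projectDir,
    stdio: 'inherit',
});
```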
I then looked at adding a test project directory, installing it in test-scaffold, and copying it from there for each test. That reduced execution time to about 100 seconds on my system.
Given that this would separate the definition of the test context (which dependencies, etc.) from test execution, I wanted to avoid it if I could, so I looked at the pnpm store. That is the version I ended up submitting in this PR:
- We define a gitignored directory as a custom pnpm store, to be used only by the test suite.
- In `beforeAll`, we write a package.json and .npmrc into a temp dir and install from there, loading any missing dependencies into the store.
- In `beforeEach`, we write that same package.json and .npmrc and execute `pnpm install --offline` to quickly install from the store (see the sketch below). On my system this runs in around 55 seconds with an empty store, and around 45 seconds once the store has been populated.
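To make that concrete, here is a minimal sketch of the setup, assuming Jest-style hooks; the store path, dependency set, and helper names are illustrative rather than the exact code in this PR:

```ts
import { execSync } from 'node:child_process';
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';

// Gitignored pnpm store, used only by the test suite.
const STORE_DIR = path.resolve(__dirname, '../../.pnpm-test-store');

const PACKAGE_JSON = JSON.stringify({
    name: 'test-project',
    private: true,
    dependencies: { prisma: '^4.0.0' }, // hypothetical dependency set
});

let projectDir: string;

function writeProjectFiles(dir: string) {
    fs.writeFileSync(path.join(dir, 'package.json'), PACKAGE_JSON);
    // Point pnpm at the dedicated store so tests never touch the global one.
    fs.writeFileSync(path.join(dir, '.npmrc'), `store-dir=${STORE_DIR}\n`);
}

beforeAll(() => {
    // Warm the store once; every later install can then run offline.
    const warmupDir = fs.mkdtempSync(path.join(os.tmpdir(), 'store-warmup-'));
    writeProjectFiles(warmupDir);
    execSync('pnpm install', { cwd: warmupDir, stdio: 'ignore' });
});

beforeEach(() => {
    projectDir = fs.mkdtempSync(path.join(os.tmpdir(), 'test-project-'));
    writeProjectFiles(projectDir);
    // Everything resolves from the pre-populated store, so this is fast.
    execSync('pnpm install --offline', { cwd: projectDir, stdio: 'ignore' });
});
```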
To me, this approach seems like a nice middle ground:
- These tests are slightly less isolated than they were before, as they rely on the pnpm store. While that's unfortunate, it isn't that different from what we already have with test-scaffold.
- Given that the project already uses pnpm, I'd expect it shouldn't be a problem to use pnpm commands within tests. I do wonder whether pnpm-installed-via-npm setups might be an issue, though.
- Using a pnpm store specific to the complete workspace test suite keeps it insulated from the rest of the system and from the actual workings of the project.
- The pnpm store seems slightly preferable to me over the test-scaffold-and-copy approach I had before, since the context of the test (package.json) isn't as separated from the test itself.
Happy to hear your thoughts on this! Is this an approach worth taking? Do you see downsides I'm missing here? Are there other factors to consider? Do you see room for further improvement?
Walkthrough
This update improves the JavaScript testing setup by introducing utilities to handle project directory initialization and package management, specifically targeting pnpm. These utilities streamline setting up and managing dependencies in tests. Additionally, the .gitignore file is updated to exclude a specific test store directory. No changes were made to the declarations of exported or public entities in the primary codebase.
Changes
| File | Change Summary |
|---|---|
| `.gitignore` | Added `/.pnpm-test-store` directory |
| `packages/schema/tests/generator/prisma-generator.test.ts` | Introduced `preparePackageJson` and `initProjectDir` functions; removed `execSync` usage |
| `script/test-utils.ts` | Added utility functions for package and project initialization and for setting up temporary directories |
Sequence Diagrams
```mermaid
sequenceDiagram
    participant Developer
    participant TestUtils
    participant ProjectDir
    participant PNPM

    Developer->>TestUtils: Call preparePackageJson()
    TestUtils-->>Developer: Returns modified package.json content
    Developer->>TestUtils: Call initProjectDir()
    TestUtils-->>ProjectDir: Create directory and write package.json
    TestUtils-->>PNPM: Install dependencies
    PNPM-->>TestUtils: Installation complete
    TestUtils-->>Developer: Project directory ready
```
Hi @WimTibackx, I'm sorry for the late response. Thanks for working on improving test speed. I've been bothered by it for quite a while but have been procrastinating on it 😅.
I think using a shared pnpm store is a great idea, which should make the installation process significantly faster, especially for running locally. From another perspective, I'm also wondering if we need to scaffold an npm project for each test at all. The tests were made this way mainly because Prisma and ZenStack both generate code into node_modules, and using disposable projects is the simplest way to isolate them. However, since both Prisma and ZenStack allow specifying where to output the generated code, we could probably just emit it into some temp dir without having a full-scale npm project at all.
What do you think about this direction? It's gonna be a bigger change though.
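For illustration, a rough sketch of that direction, with hypothetical paths and a trimmed-down schema; the real change would need to cover ZenStack plugin outputs as well:

```ts
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';

// Emit generated code into a throwaway temp dir instead of node_modules.
const outDir = fs.mkdtempSync(path.join(os.tmpdir(), 'zenstack-gen-'));

// Illustrative schema; forward slashes kept for simplicity, real code should
// normalize path separators for the current platform.
const schema = `
datasource db {
    provider = "sqlite"
    url      = "file:./dev.db"
}

generator client {
    provider = "prisma-client-js"
    output   = "${outDir}/prisma-client"
}
`;

fs.writeFileSync(path.join(outDir, 'schema.prisma'), schema);
// A test could now run generation against this schema and assert on the
// files emitted under outDir, without installing an npm project at all.
```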
Hi @ymc9! It's about time I responded to this. Sorry for the delayed reply; I got wrapped up in work and life for a bit.
> I think using a shared pnpm store is a great idea, which should make the installation process significantly faster, especially for running locally.
I looked into this last weekend, but it requires some further tinkering. My original proof of concept probably used the easiest set of dependencies, unfortunately :). The bigger dependency trees and the post-install scripts require some further work for this PR. I'm taking another shot at it today and will keep you posted.
> I'm also wondering if we need to scaffold an npm project for each test at all. (...) since both Prisma and ZenStack allow specifying where to output the generated code, we could probably just emit it into some temp dir without having a full-scale npm project at all.
That's a fair point indeed. As long as there's one full npm-project integration test per module/variant, having the other test cases run against results emitted to a temp dir could suffice.
For now, I'll see if I can apply the pnpm store approach universally, as that should already help a fair bit.
Closing for now. Will pick it up again as necessary.