perf(js): pre-build package index for O(n+m) pnpm lockfile pruning
Current Behavior
When pruning pnpm lockfiles, findOriginalKeys iterates through ALL packages for EACH external node, resulting in O(n×m) complexity:
┌─────────────────────────────────────────────────────────────┐
│ BEFORE: O(n × m) │
├─────────────────────────────────────────────────────────────┤
│ │
│ For each external node (n=500): │
│ ┌─────────────────────────────────────────┐ │
│ │ Iterate ALL packages (m=1000) │ │
│ │ ├─ lodash/4.17.21 → match? │ │
│ │ ├─ react/18.2.0 → match? │ │
│ │ ├─ typescript/5.0.0 → match? │ │
│ │ └─ ... (1000 iterations per node) │ │
│ └─────────────────────────────────────────┘ │
│ │
│ Total: 500 nodes × 1000 packages = 500,000 iterations │
│ │
└─────────────────────────────────────────────────────────────┘
Expected Behavior
Pre-build a package index Map once, then use O(1) lookups:
┌─────────────────────────────────────────────────────────────┐
│ AFTER: O(n + m) │
├─────────────────────────────────────────────────────────────┤
│ │
│ Step 1: Build index once (m=1000 iterations) │
│ ┌─────────────────────────────────────────┐ │
│ │ PackageIndex = Map<name, entries[]> │ │
│ │ "lodash" → [[key1, snap1]] │ │
│ │ "react" → [[key2, snap2], [key3,...]]│ │
│ │ "typescript" → [[key4, snap4]] │ │
│ └─────────────────────────────────────────┘ │
│ │
│ Step 2: For each external node (n=500): │
│ ┌─────────────────────────────────────────┐ │
│ │ index.get(packageName) → O(1) │ │
│ │ └─ Only check matching packages │ │
│ └─────────────────────────────────────────┘ │
│ │
│ Total: 1000 (build) + 500 (lookups) = 1,500 operations │
│ Improvement: 500,000 → 1,500 = 333× fewer iterations │
│ │
└─────────────────────────────────────────────────────────────┘
Implementation Details
Algorithm Change
┌────────────────────┐ ┌────────────────────┐
│ stringifyPnpm │ │ stringifyPnpm │
│ LockFile() │ │ LockFile() │
│ │ │ │
│ ┌──────────────┐ │ │ ┌──────────────┐ │
│ │ For each │ │ │ │ buildPackage │ │
│ │ node: │ │ ───► │ │ Index() once │ │
│ │ scan ALL │ │ │ └──────┬───────┘ │
│ │ packages │ │ │ │ │
│ └──────────────┘ │ │ ┌──────▼───────┐ │
│ │ │ │ For each │ │
│ O(n × m) │ │ │ node: O(1) │ │
└────────────────────┘ │ │ index lookup │ │
│ └──────────────┘ │
│ O(n + m) │
└────────────────────┘
Edge Cases Handled
| Case | Example Key | Solution |
|---|---|---|
| Standard v5 | lodash/4.17.21 |
extractNameFromKey with / separator |
| Standard v6+ | [email protected] |
extractNameFromKey with @ separator |
| Tarball | github.com/user/repo/commit |
Use snapshot name field |
| Alias | alias with version npm:[email protected] |
Extract real name from version |
Why Accept This PR
- Massive scalability win: O(n×m) → O(n+m) transforms worst-case performance
- Real-world impact: Large pnpm monorepos with 500+ dependencies see 333× fewer iterations
- Zero behavior change: All 108 lock-file tests pass unchanged
- Handles all edge cases: Tarball packages, aliases, v5/v6+ formats all tested
- Single-pass indexing: Index built once per
stringifyPnpmLockfilecall
Related Issue(s)
Contributes to #32669, #32254
Merge Dependencies
Must be merged AFTER: #33750
Deploy request for nx-docs pending review.
Visit the deploys page to approve it
| Name | Link |
|---|---|
| Latest commit | b5601566c642ea6c6def591ed67a0f397e62edd0 |
The latest updates on your projects. Learn more about Vercel for GitHub.
| Project | Deployment | Preview | Updated (UTC) |
|---|---|---|---|
| nx-dev | Preview | Dec 8, 2025 6:46am |
You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.
@adwait1290 We are very grateful for your enthusiasm to contribute, I kindly request that you please stop sending these AI assisted micro-perf PRs now. In future, please open an issue regarding your plans and do not simply send pages worth of micro PRs without open communication.
Upon deeper inspection in some cases, we have found that they are not resulting in real-world performance wins, and instead create regressions because they are not considering memory and GC overhead of the whole system.
We will work on better benchmarking infrastructure on our side to have greater confidence in CI as to whether these kinds of PRs are actually net wins but for now each individual PR requires a thorough investigation by the team and you are sending far, far too many.
To reduce noise on the repo, I am going to close this, but rest assured it will be looked at as part of our performance optimization and benchmarking effort and merged in if it creates a provable net win.
Thank you once again for your keenness to help make Nx the best it can be, we really appreciate it!
This pull request has already been merged/closed. If you experience issues related to these changes, please open a new issue referencing this pull request.