perf(core): optimize workspace config hash computation with caching
Current Behavior
The computeWorkspaceConfigHash function is called on every file change to detect if workspace configuration has changed. Currently it:
Object.entries(projectsConfigurations)
.sort(...) // Creates sorted array of [name, config] tuples
.map(([name, cfg]) => // Creates new array of strings
`${name}:${JSON.stringify(cfg)}`
);
return hashArray(projectConfigurationStrings);
This creates multiple intermediate arrays and always calls hashArray() even when configs haven't changed.
Expected Behavior
Optimized approach that:
- Avoids creating intermediate tuple arrays
- Caches per-project JSON strings to detect changes
- Returns cached hash when nothing changed (skipping
hashArray())
Before (every file change): After (with caching):
┌─────────────────────────┐ ┌─────────────────────────┐
│ File change detected │ │ File change detected │
└───────────┬─────────────┘ └───────────┬─────────────┘
│ │
▼ ▼
┌─────────────────────────┐ ┌─────────────────────────┐
│ Object.entries() + │ │ Object.keys().sort() │
│ sort() + map() │ │ (no tuple creation) │
│ [creates 3 arrays] │ └───────────┬─────────────┘
└───────────┬─────────────┘ │
│ ▼
│ ┌─────────────────────────┐
│ │ Compare JSON strings │
│ │ with cached values │
│ └───────────┬─────────────┘
│ │
│ ┌──────┴──────┐
│ │ │
│ No changes Has changes
│ │ │
▼ ▼ ▼
┌─────────────────────────┐ ┌───────────┐ ┌───────────┐
│ hashArray() called │ │ Return │ │ hashArray │
│ EVERY time │ │ cached │ │ + cache │
└─────────────────────────┘ │ hash │ │ result │
└───────────┘ └───────────┘
Performance Impact
For a workspace with 500 projects where configs rarely change:
| Scenario | Before | After |
|---|---|---|
| File change, no config change | ~15-50ms | ~5-15ms (skip hashArray) |
| File change, config changed | ~15-50ms | ~15-50ms (same) |
The optimization provides the biggest wins in the common case where source files change but project configs remain unchanged.
Changes
- Avoid tuple creation:
Object.keys().sort()instead ofObject.entries().sort() - Per-project JSON caching: Track JSON strings per project name
- Hash caching: Return cached hash when no changes detected
- Cache cleanup: Clear caches in
resetInternalState()
Why Accept This PR
- Zero risk: Same semantics, hash computation is identical
- Common case optimization: Most file changes don't affect project configs
- Daemon performance: This code runs in the daemon on every file save
Related Issue(s)
Contributes to #32265, #33366
Merge Dependencies
Must be merged AFTER: #33740
Deploy request for nx-docs pending review.
Visit the deploys page to approve it
| Name | Link |
|---|---|
| Latest commit | ed04515a6b767452c63c1a65aca069e3a854271d |
The latest updates on your projects. Learn more about Vercel for GitHub.
| Project | Deployment | Preview | Updated (UTC) |
|---|---|---|---|
| nx-dev | Preview | Dec 8, 2025 7:24am |
@adwait1290 We are very grateful for your enthusiasm to contribute, I kindly request that you please stop sending these AI assisted micro-perf PRs now. In future, please open an issue regarding your plans and do not simply send pages worth of micro PRs without open communication.
Upon deeper inspection in some cases, we have found that they are not resulting in real-world performance wins, and instead create regressions because they are not considering memory and GC overhead of the whole system.
We will work on better benchmarking infrastructure on our side to have greater confidence in CI as to whether these kinds of PRs are actually net wins but for now each individual PR requires a thorough investigation by the team and you are sending far, far too many.
To reduce noise on the repo, I am going to close this, but rest assured it will be looked at as part of our performance optimization and benchmarking effort and merged in if it creates a provable net win.
Thank you once again for your keenness to help make Nx the best it can be, we really appreciate it!
This pull request has already been merged/closed. If you experience issues related to these changes, please open a new issue referencing this pull request.