Immer takes a long time to update the data.
🐛 Bug Report
Immer takes a long time to update the data.
Link to repro
To Reproduce
Chrome console:
    console.time('create');
    const dataSource = Object.fromEntries(
      [...Array(4000).keys()].map((key) => [key, { key, data: { value: key } }]),
    );
    console.timeEnd('create');

    console.time('update');
    const result = produce(dataSource, (draft) => {
      draft[1000].data.value = 100;
    });
    console.timeEnd('update');
Observed behavior
I checked the source code; the problem is in `shallowCopy`. If you use `Object.assign` instead, there is no performance problem.
Time-consuming code:
    ...
    const descriptors = getOwnPropertyDescriptors(base)
    ...
    return Object.create(Object.getPrototypeOf(base), descriptors)
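For reference, here is a minimal standalone sketch comparing the two copy strategies being discussed (illustrative only, not Immer's internals; absolute timings will vary by engine and machine):

```js
// 4000 entries, matching the repro above.
const base = Object.fromEntries(
  [...Array(4000).keys()].map((key) => [key, { key, data: { value: key } }]),
);

// Descriptor-based shallow copy (what the quoted code does).
console.time('descriptor copy');
const viaDescriptors = Object.create(
  Object.getPrototypeOf(base),
  Object.getOwnPropertyDescriptors(base),
);
console.timeEnd('descriptor copy');

// Plain enumerable copy.
console.time('Object.assign copy');
const viaAssign = Object.assign({}, base);
console.timeEnd('Object.assign copy');
```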
Expected behavior
The update shouldn't take this long.
Environment
We only accept bug reports against the latest Immer version.
- Immer version:
- [x] I filed this report against the latest version of Immer
- [x] Occurs with `setUseProxies(true)`
- [ ] Occurs with `setUseProxies(false)` (ES5 only)
At a quick glance it does indeed look like the `shallowCopy` used during finalization could be more efficient when the prototype is plain `Object`; feel free to submit a PR if you're interested in exploring this further! I don't think the other call to it can be changed if getters / setters are present. Not fully sure :).
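To illustrate the getter/setter concern (a hypothetical example, not Immer code): `Object.assign` invokes accessors and stores their current values, while the descriptor-based copy preserves the accessors themselves.

```js
const base = {
  _hidden: 1,
  get value() {
    return this._hidden;
  },
};

// Object.assign invokes the getter and copies a plain data property.
const assigned = Object.assign({}, base);
console.log(Object.getOwnPropertyDescriptor(assigned, 'value'));
// { value: 1, writable: true, enumerable: true, configurable: true }

// Copying descriptors keeps the accessor intact.
const cloned = Object.create(
  Object.getPrototypeOf(base),
  Object.getOwnPropertyDescriptors(base),
);
console.log(Object.getOwnPropertyDescriptor(cloned, 'value'));
// { get: [Function: value], set: undefined, enumerable: true, configurable: true }
```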
@MinJieLiu try this, it can reduce some of the running time: `setAutoFreeze(false);`
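For example, applied to the repro above (a sketch; how much this saves depends on the size and shape of the state):

```js
import { produce, setAutoFreeze } from 'immer';

// Same data shape as the repro above.
const dataSource = Object.fromEntries(
  [...Array(4000).keys()].map((key) => [key, { key, data: { value: key } }]),
);

// Skip the recursive Object.freeze pass over the produced result.
setAutoFreeze(false);

console.time('update');
const result = produce(dataSource, (draft) => {
  draft[1000].data.value = 100;
});
console.timeEnd('update');
```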
@blinkcat In addition, the execution speed can be optimized further. Should we add a `setStrictShallowCopy(true | false)` option for opting out of the strict copy logic?
The current Immer copies data even for object references taken from the draft, so the `shallowCopy` optimization seems to work better than I thought. It would be even better if we could delay `prepareCopy` until the actual write.
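As a generic illustration of that idea (a simplified copy-on-first-write sketch, not Immer's actual `prepareCopy` mechanics):

```js
// Reads go to the base object; the shallow copy is only created on the first write.
function lazyDraft(base) {
  let copy = null;
  const proxy = new Proxy(base, {
    get(target, prop, receiver) {
      return Reflect.get(copy ?? target, prop, receiver);
    },
    set(target, prop, value) {
      if (copy === null) copy = Object.assign({}, target); // copy deferred to here
      copy[prop] = value;
      return true;
    },
  });
  return { proxy, result: () => copy ?? base };
}

const { proxy, result } = lazyDraft({ a: 1, b: 2 });
console.log(proxy.a); // 1, no copy made yet
proxy.b = 3;          // copy is created now
console.log(result()); // { a: 1, b: 3 }
```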
I submitted a PR for this. I don't know whether it's a valid approach for Immer; if the PR can't be merged, feel free to close it. You can check this online example, where Immer is noticeably faster than before: https://codesandbox.io/s/optimistic-curran-4zgb7h?file=/src/index.js
@fantasticsoul I don't mind seeing Immer reimplemented, but I do mind our issue tracker being used as a billboard. I'll leave this comment up, since its unique selling point is perf and that might be relevant here, but I will remove most of the other comments.
I patched my installation of immer with the key lines from #941 and I got a decrease from 70ms to 3ms in my redux reducer in a typical workload.
#941 definitely seems like a useful PR, which would make the perf optimization optional, just like `setAutoFreeze`. That said, the optimization is safe because of the `isPlainObject` check, so I would consider making it the default behavior.
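A rough sketch of what such a gated copy could look like (illustrative only; the actual change lives in #941, and the `isPlainObject` check here is a common approximation rather than Immer's exact one):

```js
function isPlainObject(value) {
  if (value === null || typeof value !== 'object') return false;
  const proto = Object.getPrototypeOf(value);
  return proto === Object.prototype || proto === null;
}

function shallowCopy(base, strict) {
  if (!strict && isPlainObject(base)) {
    // Fast path: plain objects have no exotic prototype and typically no accessors to preserve.
    return Object.assign({}, base);
  }
  // Strict path: preserves getters/setters, property flags, and non-enumerable keys.
  return Object.create(
    Object.getPrototypeOf(base),
    Object.getOwnPropertyDescriptors(base),
  );
}
```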
Poking my head in with two thoughts:
- We'd absolutely love to see any perf optimizations, and something that assumes plain objects would be great for usage with Redux Toolkit!
- Just saw a post at https://dev.to/unadlib/mutative-10x-faster-than-immer-2060 talking about another Immer alternative that claims to be faster, and I was curious what the actual perf is and what actual implementation differences are responsible for those. (in other words, "I don't have time to look into this myself, but if anyone does and would like to write that up I'd appreciate it!" :) )
- Update: also just saw another "faster than Immer" lib at https://github.com/giusepperaso/structura.js , so I've got similar questions about its implementation differences
Edit: Note from https://www.reddit.com/r/javascript/comments/108bnwo/i_made_a_typescript_library_similar_to_immer_but/ :
> I assume that one of the reasons is that I used WeakMap a lot for caching; this and WeakSet are extremely optimized under V8, and by being weak they can free memory as soon as possible (although there is no guarantee).
> Another reason is that Immer uses Reflect.ownKeys and Object.getOwnPropertyDescriptors during the shallow copy. This allows for more correctness in some edge cases but is quite slow, and as far as I know this is not behind a flag, so it can't be disabled. In Structura this is disabled by default and can be enabled with a flag; in that case Structura is still faster, but by less, roughly 8x at most.
> Also there are some simplifications, like there being no ES5 support, and patches use a custom format which is easier to handle.
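For context on the "correctness in some edge cases" point above, a small illustration of what the stricter, descriptor-based copy can see that a plain enumerable copy misses:

```js
const sym = Symbol('meta');
const base = { visible: 1, [sym]: 2 };
Object.defineProperty(base, 'hidden', { value: 3, enumerable: false });

console.log(Object.keys(base));     // ['visible']
console.log(Reflect.ownKeys(base)); // ['visible', 'hidden', Symbol(meta)]

// Object.assign copies enumerable own properties (including symbols) but drops
// 'hidden' and normalizes all property attributes; the descriptor-based copy keeps both.
console.log(Object.assign({}, base)); // { visible: 1, [Symbol(meta)]: 2 }
```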
🎉 This issue has been resolved in version 10.0.0 🎉
The release is available on:
Your semantic-release bot 📦🚀