stableHash compare questions / concerns
I recently upgraded to SWR 1.1.0 and ran into some problems resulting from the switch from dequal to stableHash for SWR's compare option:
- We use a custom fetcher that, among other things, turns timestamps into js-joda instances. `dequal` works fine for these; it's able to correctly check for equality by comparing the individual properties of the js-joda instances (even though they're custom classes). SWR's `stableHash` instead treats these as opaque types, so it always reports objects as changed. This is a backward-compatibility-breaking change, and it wasn't clear to me from reading the release notes or changelog.
- The documentation is out of date; it still states that `dequal` is used.
- Comparing via `stableHash` appears to be 7-13× slower than `dequal` (depending on the specific objects being compared). See https://codesandbox.io/s/swr-hash-test-1blcr.
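To illustrate the first point, here is a minimal sketch of a hash that only recurses into plain objects. This mimics the observed behavior; it is not SWR's actual `stableHash` implementation, the constructor check is an assumption, and `LocalDate` is a hypothetical stand-in for a js-joda class:

```typescript
let counter = 0
const seen = new WeakMap<object, string>()

function sketchHash(arg: unknown): string {
  if (Array.isArray(arg)) return '[' + arg.map(sketchHash).join(',') + ']'
  if (arg && typeof arg === 'object') {
    if (arg.constructor === Object) {
      const obj = arg as Record<string, unknown>
      // Plain object: serialize keys in sorted order so the hash is stable.
      return '{' + Object.keys(obj).sort().map((k) => k + ':' + sketchHash(obj[k])).join(',') + '}'
    }
    // Any other instance (e.g. a js-joda date) is treated as opaque:
    // each object gets a unique id, so two equal instances never match.
    if (!seen.has(arg)) seen.set(arg, '@' + counter++)
    return seen.get(arg)!
  }
  return String(arg)
}

// Hypothetical stand-in for a js-joda class.
class LocalDate {
  constructor(public iso: string) {}
}

console.log(sketchHash({ d: '2021-12-01' }) === sketchHash({ d: '2021-12-01' })) // true
console.log(sketchHash(new LocalDate('2021-12-01')) === sketchHash(new LocalDate('2021-12-01'))) // false
```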
I understand the advantages that stableHash gives for SWR keys (the "Built-in & stable serialization for SWR keys" highlight from the 1.1.0 release notes), but in light of these problems, it seems like it may be better to go back to dequal for comparisons?
Thanks for reporting this! First of all, the intention wasn't to make this a breaking change; if the behavior changed, that should be fixed. We will also update the docs, great catch here.
I'll look deeper into the performance impact, but since each operation takes less than 0.01 ms, I think it's OK and definitely not the bottleneck. I'll run a benchmark with a larger payload first, though.
Thanks for the quick reply!
In case it helps: in my initial tests, I used a 78 MB payload (likely unrealistic; I had grabbed the wrong test file) and saw 360 ms for `stableHash` versus 78 ms for `dequal`. With more realistic payloads from my application (in the 150-250 kB range), I saw 0.04 ms for `dequal` and 1.9-2.0 ms for `stableHash`. (These are all on an M1 Mac and assume that `stableHash` needs to run once per comparison.) Like you said, that's likely not a bottleneck, although I can't help but put on my old-school C programmer's hat and think that it'd be nice to not give it up, if all other considerations are equal. 🙂
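For reference, this kind of measurement can be reproduced with a rough harness like the following (methodology assumed; this is not the linked CodeSandbox, and absolute numbers will vary by machine):

```typescript
// Rough timing harness: average milliseconds per run over many runs.
// `performance.now()` is available globally in browsers and modern Node.
function timeIt(fn: () => void, runs = 1000): number {
  const start = performance.now()
  for (let i = 0; i < runs; i++) fn()
  return (performance.now() - start) / runs
}

// Build a payload roughly in the 150-250 kB range when serialized.
const payload = {
  items: Array.from({ length: 2000 }, (_, i) => ({ id: i, name: 'item-' + i })),
}
const copy = JSON.parse(JSON.stringify(payload))

// Time a serialize-then-compare strategy (hash-style) on that payload.
const msPerRun = timeIt(() => JSON.stringify(payload) === JSON.stringify(copy))
console.log('serialize-compare:', msPerRun.toFixed(4), 'ms/run')
```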
> although I can't help but put on my old-school C programmer's hat and think that it'd be nice to not give it up, if all other considerations are equal. 🙂
Really appreciate this, and I totally agree! I did a quick test just now, and it's 3-4× slower for normal-sized payloads (1 kB to 50 kB).
For the comparison part, if the difference is that huge then yes it makes sense to move away from the hash approach.
Just chiming in here to say that we just had to replace stableHash due to sporadic freezing issues when dealing with large responses (15-20 MB). CPU usage would go up to 100% and stay there, locking the entire tab indefinitely.
We opted to use `react-fast-compare` rather than `dequal` since we were already depending on it, and now the issues are gone. I've also observed a major decrease in memory usage; it seems like roughly a 50% reduction.
I had the same problem. The page became unresponsive because the payload was 4 MB and the default deep-compare function (`stableHash`) was doing too much work. Ended up using `fast-deep-equal`, and that made the page snappy again.
A temporary workaround is to pass a lightweight compare function to the compare option.
https://swr.vercel.app/docs/api#options
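As a sketch, such a compare function could be a minimal structural deep-equal like the one below (in practice you would likely pass `dequal` or `fast-deep-equal` directly; the `useSWR(key, fetcher, { compare: deepEqual })` usage in the comment assumes the documented `compare` option):

```typescript
// Minimal structural deep-equal, for illustration only; a real app should
// use dequal or fast-deep-equal, e.g.:
//   useSWR(key, fetcher, { compare: deepEqual })
function deepEqual(a: unknown, b: unknown): boolean {
  if (a === b) return true
  if (a && b && typeof a === 'object' && typeof b === 'object') {
    if (Array.isArray(a) !== Array.isArray(b)) return false
    const keysA = Object.keys(a)
    if (keysA.length !== Object.keys(b).length) return false
    // Compare own enumerable properties recursively.
    return keysA.every((k) =>
      deepEqual((a as Record<string, unknown>)[k], (b as Record<string, unknown>)[k])
    )
  }
  // Treat NaN as equal to itself for data-comparison purposes.
  return a !== a && b !== b
}

console.log(deepEqual({ a: [1, 2] }, { a: [1, 2] })) // true
console.log(deepEqual({ a: 1 }, { a: 2 })) // false
```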
As https://github.com/vercel/swr/issues/1719#issuecomment-999918252 mentioned, a hashing function is not optimal for comparing data, and we could improve the performance by having an alternative compare function as the default.
> For the comparison part, if the difference is that huge then yes it makes sense to move away from the hash approach.
Hey team, `stableHash` seems to have a serious problem: it doesn't give the correct result for two equal objects (200 kB). Unfortunately, I can't share my payload, but it's definitely the case. `fast-deep-equal` gives the correct result.
I was having a similar issue with cache misses. For me, the problem was instantiating cache keys from a class: the `stableHash` function doesn't treat these objects the same way as plain `{}` objects.
```ts
class CacheKey {
  data?: string
  constructor(data: string) {
    this.data = data
  }
}

// false
hash(new CacheKey('data')) == hash(new CacheKey('data'))

// true
hash({ data: 'data' }) == hash({ data: 'data' })
```
From what I can tell, it's because the hash function just checks whether `arg.constructor == Object`, which is false for an object instantiated from a class. So even though two objects may serialize to the same string, the cache misses.
The solution for us was to just use plain objects. The drawback was some minor refactoring when matching keys in the `mutate` function; it would have been nice to just write `mutate((key) => key instanceof CacheKey && /* ... */)`.
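A sketch of that refactor (names here are hypothetical, not from the original code): replace the class with a tagged plain object plus a type guard, which can stand in for the `instanceof` check when matching keys:

```typescript
// Tagged plain-object key: because its constructor is Object, structurally
// equal keys serialize/hash identically, unlike class instances.
type CacheKey = { tag: 'cache-key'; data: string }

const makeKey = (data: string): CacheKey => ({ tag: 'cache-key', data })

// Type guard replacing `key instanceof CacheKey` when matching keys.
function isCacheKey(key: unknown): key is CacheKey {
  return typeof key === 'object' && key !== null && (key as CacheKey).tag === 'cache-key'
}

console.log(isCacheKey(makeKey('data'))) // true
console.log(isCacheKey({ data: 'data' })) // false
```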