fast-memoize.js
serializing class instances for 2+ arguments
Reading through the code, I found this:
if (arguments.length === 1) {
  cacheKey = arguments[0]
} else {
  cacheKey = serializer(arguments)
}
Given that the default serializer uses JSON.stringify():
function jsonStringify () {
  return JSON.stringify(arguments)
}
you'll potentially get wrong cache hits for any memoized function call whose arguments include class instances that don't expose all their properties as enumerable (whenever you provide more than one argument).
Example:
// prep
class LotteryTicket {
  constructor(code) {
    Object.defineProperty(this, 'code', {
      enumerable: false,
      configurable: false,
      writable: false,
      value: code,
    });
  }

  check(code) {
    return this.code === code;
  }
}

function checkWinner(ticket, code) {
  return ticket.check(code);
}
var memoizedCheckWinner = memoize(checkWinner);
// test
memoizedCheckWinner(new LotteryTicket('xxx'), 'xxx'); // true, correct
// serialized cacheKey: {"0":{"0":{},"1":"xxx"}}
memoizedCheckWinner(new LotteryTicket('yyy'), 'xxx'); // true; incorrect (should be false)
// serialized cacheKey: {"0":{"0":{},"1":"xxx"}}; wrong cache hit
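The root cause is that JSON.stringify only serializes enumerable own properties, so two distinct tickets collapse to the same string. A minimal standalone demonstration:

```javascript
// JSON.stringify ignores non-enumerable properties, so two distinct
// instances serialize to the same string and yield the same cache key.
class LotteryTicket {
  constructor(code) {
    Object.defineProperty(this, 'code', {
      enumerable: false,
      value: code,
    });
  }
}

const a = JSON.stringify(new LotteryTicket('xxx'));
const b = JSON.stringify(new LotteryTicket('yyy'));

console.log(a);       // "{}"
console.log(b);       // "{}"
console.log(a === b); // true -- distinct tickets, identical cache key
```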
Thanks for the well written bug report. Will make a test to cover this case and push a fix.
The trick here is how to make it as fast as possible. Will have to implement a couple different algorithms and add to the benchmark suite.
happy to help.
I understand that the speed is the main priority, but I'd consider it a rather serious flaw - it should be at least documented :)
Another potential issue: since you're using instances as cache keys, you potentially keep the instances from being GC'd, hogging memory - just something to think about :)
Totally true. Maybe WeakMap can help with that last issue?
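WeakMap only accepts objects as keys and looks them up by identity, so it naturally fits the single-object-argument case without pinning the instance in memory. A minimal sketch (memoizeByIdentity is a hypothetical helper, not part of fast-memoize):

```javascript
// Sketch: a memoizer for functions of one object argument, keyed on a
// WeakMap so the cached entry becomes collectable once the argument
// itself is unreachable. Note the trade-off: cache hits are decided by
// object identity, not by structural equality.
function memoizeByIdentity(fn) {
  const cache = new WeakMap();
  return function (obj) {
    if (!cache.has(obj)) {
      cache.set(obj, fn(obj));
    }
    return cache.get(obj);
  };
}

// Usage
let calls = 0;
const double = memoizeByIdentity(obj => { calls++; return obj.n * 2; });
const point = { n: 21 };
console.log(double(point), calls); // 42 1
console.log(double(point), calls); // 42 1  (cache hit by identity)
```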
sure, but here's a twist: how will you serialize and compare the arguments?
You could create a special cache for calls with Objects in arguments.
This will be a serious perf hit... :/
Since this lib is very focused on being fast, I'm starting to think the only way to maintain speed is to expose some complexity to the user.
Like, if you want to memoize a function that will receive non-primitive arguments, do like:
const memoized = memoize(fn, {
  optimizeFor: 'object'
})
For primitive types:
const memoized = memoize(fn, {
  optimizeFor: 'primitive'
})
Or if you (as a client of the lib) don't want to think about details, just use a common solution:
const memoized = memoize(fn)
This way the memoize function works as a factory, returning the best solution for each case.
Thoughts?
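For concreteness, here's a rough sketch of what such a factory could look like. To be clear, optimizeFor and both strategies are part of this proposal, not fast-memoize's actual API:

```javascript
// Standalone sketch of the proposed factory (optimizeFor is hypothetical).
// Each strategy trades generality for speed.
function memoize(fn, options = {}) {
  const strategy = options.optimizeFor || 'default';

  if (strategy === 'primitive') {
    // Fast path: a single primitive argument is used directly as the key.
    const cache = Object.create(null);
    return function (arg) {
      if (!(arg in cache)) cache[arg] = fn(arg);
      return cache[arg];
    };
  }

  // 'object' / default path: serialize all arguments. Slower and more
  // general, though JSON.stringify still has the non-enumerable-property
  // pitfall discussed above.
  const cache = Object.create(null);
  return function (...args) {
    const key = JSON.stringify(args);
    if (!(key in cache)) cache[key] = fn(...args);
    return cache[key];
  };
}

// Usage
const square = memoize(n => n * n, { optimizeFor: 'primitive' });
console.log(square(4)); // 16
```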
I'd just handle the one-argument scenario and document the complexity without implementing. The current solution will work for most cases, so it should be good.
However, it needs to be documented/mentioned :)
You could provide the proposed solution as well, given that it's documented too :)
WDYT?
Sorry for intruding on this, but I would love to have it fixed. So, what if we had something like:
const hash = Array.prototype.slice.call(arguments)
  .map(item => typeof item === 'function' ? item.toString() : JSON.stringify(item))
  .join('|');
Then, hash is how you index your result.