Grant Forrest
I'm particularly curious to profile storage usage and memory footprint in real-world use, but also to explore datasets without opening IDB directly.
I suspect I didn't think deeply enough when I decided this wasn't possible early on... Suppose each schema uses a non-sequential ID... just a generated value, or even a hash....
While unconfirmed changes are batching up, if two consecutive changes overwrite the same field, drop the first to reduce the total number of operations...
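A minimal sketch of that coalescing idea, assuming a hypothetical pending-operation shape (the `PendingOp` type and field names are illustrative, not the library's actual internals):

```typescript
// Hypothetical shape for a pending field-overwrite operation.
type PendingOp = { docId: string; field: string; value: unknown };

// Coalesce consecutive overwrites: when a later op sets the same
// doc+field as an earlier one, the earlier op is dropped. A Map keeps
// one entry per doc+field key, always holding the latest value.
function coalesce(ops: PendingOp[]): PendingOp[] {
  const latest = new Map<string, PendingOp>();
  for (const op of ops) {
    latest.set(`${op.docId}:${op.field}`, op);
  }
  // Surviving ops come out in first-occurrence order; that's only safe
  // because plain field overwrites to different fields commute.
  return [...latest.values()];
}
```

This only works for pure overwrites; operations with merge semantics (list inserts, counters) can't be dropped this way.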
Right now, `.put`, `.delete`, and `.deleteAll` can't be used inside a batch, because they're async. Can I allow passing a batch name to include them in an existing batch?
I know I left a lot of these comments around; I need to go back and review them.
- [x] Batching tweak config
- [ ] Deep change subscription
- [ ] What each field type means
Querying an index for the value `null` doesn't match anything, even when docs have null in that index. The query actually matches the string `"null"` because IDB can't query null directly, but...
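IndexedDB's valid key types don't include null, so one common workaround is mapping null to a sentinel at index-write time and translating query values the same way. A sketch under that assumption (the sentinel choice and function names are hypothetical):

```typescript
// A sentinel that can't collide with the legitimate string "null",
// since user strings are unlikely to start with a NUL character.
const NULL_SENTINEL = "\u0000null";

// Applied when writing index values and when translating query inputs.
function toIndexKey(value: string | number | null): string | number {
  return value === null ? NULL_SENTINEL : value;
}

// Applied when reading index keys back out.
function fromIndexKey(key: string | number): string | number | null {
  return key === NULL_SENTINEL ? null : key;
}
```

The point of the sentinel (versus `String(null)`) is exactly the bug above: coercing with `String` makes a null index value indistinguishable from a document whose field really is the string `"null"`.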
Allow issuing tokens that specify read/write access for particular collections in the schema. "Read" could be assumed for any non-specified collection. This enables use cases where a privileged group...
Maybe only if mutations are included?
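A sketch of what those token claims might look like, assuming a simple per-collection map (the `TokenClaims` shape and helper names are hypothetical):

```typescript
type Access = "read" | "write";

// Hypothetical claims payload embedded in an issued token.
interface TokenClaims {
  collections?: Record<string, Access>;
}

function canWrite(claims: TokenClaims, collection: string): boolean {
  return claims.collections?.[collection] === "write";
}

function canRead(claims: TokenClaims, collection: string): boolean {
  // Read is assumed for any collection not explicitly specified;
  // "write" implies "read".
  const access = claims.collections?.[collection];
  return access === undefined || access === "read" || access === "write";
}
```

With this default, a token listing nothing is effectively read-only everywhere, and listing a collection as `"read"` is redundant but harmless documentation of intent.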