classic-level
POC: use shared ArrayBuffer for `nextv()`
This makes `db.iterator()` with buffer encoding as fast as an iterator with utf8 encoding. The approach is:
- In each `nextv()` call, create a `std::vector<char>` to hold the raw data of multiple entries
- Copy LevelDB slices directly into that vector with `memcpy()`
- Create an ArrayBuffer backed by the vector
- In JS, split it into Buffers, each backed by the same ArrayBuffer but using a different offset.
Apart from this being an incomplete implementation (it makes utf8 slower because the C++ side is buffer-only, meaning JS has to transcode buffers to utf8), the approach has a downside: if userland code keeps a reference to just one of the Buffers, the entire ArrayBuffer is kept alive too. I.e. it costs memory.
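To illustrate the retention downside (a sketch; the 1 MiB size is arbitrary): a tiny Buffer view still references its full backing ArrayBuffer, so as long as the view is reachable, the whole allocation stays alive.

```javascript
const assert = require('node:assert')

// A large backing store, as nextv() might allocate for a batch of entries
const big = new ArrayBuffer(1024 * 1024)

// An 8-byte zero-copy view into it, as userland might retain
const small = Buffer.from(big, 0, 8)

// small.buffer is still the full 1 MiB ArrayBuffer; the GC cannot free
// the other ~1 MiB while `small` is reachable.
console.log(small.byteLength, small.buffer.byteLength)
```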
For now this PR is just a reference. The ideal solution (for this particular bottleneck) sits somewhere in between. For example, I might take just the ArrayBuffer concept and use it to replace `napi_create_buffer_copy()`.