Greg Look
It is beginning to seem like many use-cases for merkle-db will wind up self-managing database root pointers instead of using the built-in connection protocol and merkledag-ref tracking components. To support...
The `table/scan` method should have a simple consumer API for making _prefix scans_ on the table. This likely has specific behavior for each lexicoder, but it may be possible to...
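One generic way to implement a prefix scan over order-preserving encoded keys is to translate the prefix into a half-open key range and reuse the existing range-scan machinery. The sketch below is in Python rather than Clojure, purely to illustrate the range computation; the function names and the flat sorted-list "table" are hypothetical, not merkle-db's API.

```python
import bisect

def prefix_range(prefix: bytes):
    """Compute a half-open [start, end) byte range covering every key
    that begins with `prefix`. Returns (start, None) when the prefix is
    all 0xFF bytes, meaning the scan is unbounded above."""
    start = prefix
    # Strip trailing 0xFF bytes, then increment the last remaining byte
    # to get the smallest key that sorts after all keys with this prefix.
    trimmed = prefix.rstrip(b"\xff")
    if not trimmed:
        return start, None
    end = trimmed[:-1] + bytes([trimmed[-1] + 1])
    return start, end

def prefix_scan(sorted_keys, prefix: bytes):
    """Return all keys in a sorted list that start with `prefix`,
    by delegating to a plain range scan."""
    start, end = prefix_range(prefix)
    lo = bisect.bisect_left(sorted_keys, start)
    hi = len(sorted_keys) if end is None else bisect.bisect_left(sorted_keys, end)
    return sorted_keys[lo:hi]
```

Because the range is computed on the encoded bytes, this works uniformly regardless of which lexicoder produced the keys; the lexicoder-specific part is only deciding what a "prefix" of a logical value means.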
Do some design work and determine what a good partition-centric API for large bulk read/update jobs looks like.
As an optimization, return `clojure.lang.MapEntry` values for each `[key data]` record instead of two-element vectors. This would probably lead to both memory and speed improvements, since we'd no...

The current implementation uses a linear scan over sorted values in two places when reading from a table. One is finding the children of an index node to forward queries...
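Since the values are already sorted, both linear scans could become binary searches. As a language-neutral illustration (Python here, with a hypothetical node shape, not merkle-db's actual data structures), routing a key to the right child of an index node looks like:

```python
import bisect

def find_child(split_keys, children, key):
    """Route `key` to the child subtree responsible for it.

    Assumes an index node with n children holds n-1 sorted split keys,
    where child i covers keys in [split_keys[i-1], split_keys[i]).
    bisect_right finds the slot in O(log n) instead of a linear scan.
    """
    assert len(children) == len(split_keys) + 1
    return children[bisect.bisect_right(split_keys, key)]
```

In Clojure the same effect could be had with `java.util.Collections/binarySearch` over the split-key vector; the convention for keys equal to a split point (here: routed right) just needs to match the node invariants.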
Write some projects to demonstrate example use-cases and add them to the repo under `examples/`. This will reify some of the scenarios described in the usage doc and provide some...
To provide visibility into the operation of the code, there should be a generic way to opt in to metrics collection. This could take the form of a dynamic reporting function, which...
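A dynamically-bound reporting function keeps collection opt-in and zero-cost by default. In Clojure this would likely be a dynamic var used with `binding`; the Python sketch below mimics that pattern with a `contextvars` hook (all names here are hypothetical, not part of merkle-db).

```python
import contextvars

# Dynamically-bound metrics hook; defaults to None so that reporting
# is a cheap no-op unless a consumer opts in.
_metric_reporter = contextvars.ContextVar("metric_reporter", default=None)

def report(event: str, value):
    """Emit a metric event to the currently-bound reporter, if any."""
    reporter = _metric_reporter.get()
    if reporter is not None:
        reporter(event, value)

def with_reporter(reporter, f, *args):
    """Run f with the reporter bound, analogous to Clojure's `binding`."""
    token = _metric_reporter.set(reporter)
    try:
        return f(*args)
    finally:
        _metric_reporter.reset(token)
```

Internal code calls `report` at interesting points (node fetches, block writes, etc.); consumers who want metrics wrap their calls in `with_reporter`, and everyone else pays only a nil check.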
The split-point keys in index nodes don't need to be fully decodable values, so to save space they can be truncated to the shortest key which still divides the sibling...
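The standard way to compute such a truncated separator is to take the shortest prefix of the right sibling's first key that still sorts strictly after the left sibling's last key. A minimal Python sketch of that computation (the function name is hypothetical):

```python
def shortest_separator(left: bytes, right: bytes) -> bytes:
    """Return the shortest prefix of `right` that still sorts strictly
    after `left`, making it a valid split key between two siblings.

    Assumes left < right in unsigned lexicographic byte order. Every
    key in the left subtree is <= left < separator, and every key in
    the right subtree is >= right >= separator, so routing stays correct.
    """
    assert left < right
    for i in range(1, len(right) + 1):
        candidate = right[:i]
        if candidate > left:
            return candidate
    return right
```

For example, between sibling boundary keys `b"apple"` and `b"banana"` a single byte `b"b"` suffices as the stored split point.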
Currently the `integer-lexicoder` implementation supports 1-8 byte values via the respective byte/short/long primitive types in the JVM. The key encoding format is extensible to larger values up to 128 bytes,...
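For arbitrary-precision integers, a common order-preserving scheme is a header byte encoding sign and byte length, followed by big-endian magnitude bytes (complemented for negatives). The Python sketch below illustrates the idea generically; it is not the actual merkle-db key format, and the exact header layout is an assumption.

```python
def encode_int(x: int) -> bytes:
    """Order-preserving variable-length integer encoding sketch.

    Header byte 0x80 + n marks a non-negative value with n magnitude
    bytes; 0x80 - n marks a negative one, with complemented bytes so
    that unsigned byte comparison matches numeric order. Supports
    magnitudes up to 127 bytes."""
    if x >= 0:
        n = max(1, (x.bit_length() + 7) // 8)
        return bytes([0x80 + n]) + x.to_bytes(n, "big")
    else:
        m = -x - 1  # bias so that -1 encodes magnitude 0
        n = max(1, (m.bit_length() + 7) // 8)
        body = bytes(b ^ 0xFF for b in m.to_bytes(n, "big"))
        return bytes([0x80 - n]) + body
```

Longer non-negative values get larger headers and longer negative values get smaller ones, so ordering holds across lengths as well as within them; this is the property a widened `integer-lexicoder` would need to preserve.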
Many parts of the internal table machinery currently use lazy sequences to defer node fetches until they're needed to serve new elements in something like a `table/scan` call. It would...
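One alternative to element-at-a-time laziness is a generator that realizes node fetches in eager batches, so I/O can be grouped (or parallelized) while the consumer still pulls results incrementally. A minimal Python sketch, with a hypothetical `fetch` function standing in for a node-store read:

```python
from itertools import islice

def batched_fetch(node_ids, fetch, batch_size=4):
    """Yield fetched nodes lazily to the consumer, but resolve the
    underlying fetches eagerly in groups of `batch_size`."""
    it = iter(node_ids)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        # Fetch the whole batch up front; this is the natural place to
        # dispatch the reads concurrently against the block store.
        results = [fetch(nid) for nid in batch]
        yield from results
```

The consumer-facing shape is unchanged (still an incremental sequence), but each step amortizes its I/O over a batch instead of deferring every individual fetch.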