Paul Frazee
Yeah I suggest we check the hash against current and noop if it's not a change. Then we add an option flag, `allowDuplicate` or something like that, which if true...
The hashing will occur in either situation, won't it? You have to hash any new writes to the hypercore.
cc @maxogden @substack
Oh good question. If we assume monotonicity on the labels (semvers) could we use a B-Tree? @mafintosh
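To illustrate why monotonic labels help: if versions are only ever appended in increasing order, the index is already sorted, so a B-tree-style O(log n) lookup reduces to binary search over the existing entries. This sketch assumes plain numeric versions for simplicity (real semver strings would need a proper comparator); `entries` is a hypothetical `{ version, seq }` index, not hyperdrive's structure.

```javascript
// Binary search over a version index that is sorted because labels
// are assumed to be appended monotonically. Returns the hypercore
// sequence number for `version`, or -1 if that version was never tagged.
function lookup (entries, version) {
  let lo = 0
  let hi = entries.length - 1
  while (lo <= hi) {
    const mid = (lo + hi) >> 1
    if (entries[mid].version === version) return entries[mid].seq
    if (entries[mid].version < version) lo = mid + 1
    else hi = mid - 1
  }
  return -1
}
```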
So, @mafintosh and I were discussing the lookup optimization he's adding rn, and @mafintosh realized that we could just as easily put version/checkpoint information in a file within the dat,...
The .datignore code is probably still good, we just need to choose a new filename convention now.

On Tue, May 19, 2020 at 5:00 AM Mathias Buus wrote: > We...
Yeah there's some history to this. I started work on [this pr](https://github.com/mafintosh/hyperdrive/pull/99) to make the list() API work more like you'd expect (just in terms of behavior, not performance). We...
> Another nice thing about the ipfs approach is that you can walk the directory tree iteratively without pulling down the entire directory structure first. That won't be doable by...
Also @substack it's an interesting idea to do recursive archives instead of an internal folder concept, like IPFS (basically) did. I wonder what the downside would be
@mafintosh yeah so even with optimizations around network lookup (e.g. we assume a peer serving the parent archive will have the subarchives, rather than hitting the discovery network each time)...