Frank Wessels
@klauspost Thanks for the clear explanation, I will give it a try. One thing though is that essentially two modes are supported when pushing into the cloud, namely deduped and...
The current behaviour is a "quick" implementation; obviously a streaming approach is the correct way to do this. I will create an issue for it. Regarding 2) - magic number could...
You are saying: "just as easy to use the root hash to retrieve the chunk hashes from the backing store". This is what is already happening here: just as easy...
Using 1) you can directly get any portion of the file, e.g. if you need (given 5 MB chunks) the range 10-15 MB, you would get the 3rd hash (0:2)...
Regarding 1) The amount of data would not be so much the issue; it is more that, for every read access to the object, you would first need to get...
Regarding the chunk size, it is correct that this has to be a "known" property. In case it would ever get lost, you could still figure it out by doing a...
I am aware that Dat is more of a peer-to-peer system; Noms is a newer project that I haven't studied much. Especially with content-defined chunking (https://github.com/s3git/s3git/issues/20#issuecomment-285788205) it would make sense...
Here are some pointers for NEON stuff:

- [BLAKE2_NEON_Compress32](https://github.com/weidai11/cryptopp/blob/master/blake2.cpp#L3465)
- [BLAKE2_NEON_Compress64](https://github.com/weidai11/cryptopp/blob/master/blake2.cpp#L3971)
- [Rust implementation of BLAKE2 with SIMD optimizations](https://github.com/cesarb/blake2-rfc#simd-optimization)
The benchmarking code is in https://github.com/minio/blake2b-simd/blob/master/benchmarks_test.go. Since we did the benchmark there have been developments in Golang (stdlib) so, given some spare time, it would be good to redo the...
You are correct, I was a bit too quick there. We will modify these benchmark tests and capture the results in consecutive runs in order to compare between old and...