Victor Efimov
IMHO it's an Amazon problem: https://forums.aws.amazon.com/thread.jspa?messageID=399111. I experience it with my client https://github.com/vsespb/mt-aws-glacier when doing high-concurrency uploads.
It seems changing tcp_congestion_control from cubic to westwood helps a lot in my case (I use ADSL). Note that different Linux kernels have different default tcp_congestion_control. Also different mode...
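On Linux the current algorithm can be inspected and changed via `sysctl net.ipv4.tcp_congestion_control`. A minimal sketch, assuming Linux and the usual `key = value` sysctl output format; the `parse_sysctl` helper name is mine, purely for illustration:

```python
def parse_sysctl(line: str) -> tuple[str, str]:
    """Split a 'key = value' line as printed by
    `sysctl net.ipv4.tcp_congestion_control`."""
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

key, value = parse_sysctl("net.ipv4.tcp_congestion_control = cubic")
print(value)  # cubic

# Switching to westwood (root, Linux only; the module must be available):
#   sysctl -w net.ipv4.tcp_congestion_control=westwood
```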
Reading another user's file, allowing an attacker to bypass the OS file permission system just by connecting to localhost (which all local users are allowed to do), is a vulnerability. IMHO it should...
Hello. Thank you for the feature request. Yes, I am going to implement this feature. > Since the journal keeps track of what files we have retrieved and when, it shouldn't...
> so just a maximum rate (not inclusive of free amount) I actually meant the "real" rate, not the Amazon billing "rate". I.e. if you specify 100Mb/hour, that means `mtglacier` will retrieve...
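A minimal sketch of what such a "real" average rate cap could look like (my own illustration, not the actual `mtglacier` implementation): track the total bytes retrieved, compute how long that amount *should* have taken at the target rate, and sleep out any difference.

```python
import time

class AverageRateLimiter:
    """Hypothetical sketch: keep the long-run average retrieval rate
    at or below rate_bytes_per_sec."""

    def __init__(self, rate_bytes_per_sec, clock=time.monotonic, sleep=time.sleep):
        self.rate = rate_bytes_per_sec
        self.clock = clock      # injectable for testing
        self.sleep = sleep
        self.start = clock()
        self.total = 0

    def account(self, nbytes: int) -> float:
        """Record nbytes retrieved; sleep if we are ahead of schedule.
        Returns the sleep duration (0.0 if none)."""
        self.total += nbytes
        expected = self.total / self.rate        # time this much data should take
        elapsed = self.clock() - self.start
        delay = max(0.0, expected - elapsed)
        if delay:
            self.sleep(delay)
        return delay
```

This gives an average rate over the whole run rather than an instant throughput cap, which matches the "not instant throughput" distinction discussed in these comments.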
I am marking this as an enhancement. Most likely I will implement it some day. Not sure if this will be instant throughput (possible only if I implement bandwidth throttling) or an average over the last...
It was written before Amazon introduced the different methods, so I suspect it's the standard one. No way to choose yet.
> I don't think this violates any consistency with what's in glacier, since the mtime isn't stored anyway. The problem is that, yes, `mtime` *is* stored on Amazon servers too, together with the filenames....
Well, if that new, altered `mtime` does not affect any logic other than the mtime+treehash check, that might work. I.e. if a person tries to upload the original file with `--detect=mtime`, the real mtime should...
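To make the concern concrete, here is a sketch of what an mtime-based change check boils down to (my own simplified illustration, not the real `mtglacier` code): a file is considered modified iff its current mtime differs from the one recorded in the journal, so an externally altered mtime would trigger a spurious re-upload.

```python
def detect_mtime_needs_upload(journal_mtime: int, local_mtime: int) -> bool:
    """Hypothetical sketch of --detect=mtime: re-upload iff the local
    mtime differs from the mtime recorded in the journal at upload time.
    A tool that rewrites mtime after upload would wrongly trip this check."""
    return local_mtime != journal_mtime
```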
I just want to make it clear to end users that it's simply a cache of `filename+mtime=treehash` (an entry in this cache guarantees that if file `filename` has mtime=`mtime`, then we can assume...
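The cache semantics described above can be sketched like this (assumed names, not the real mt-aws-glacier internals): a hit means the file's treehash can be reused without re-reading the file; any mtime change is a miss and forces recomputation.

```python
class TreehashCache:
    """Sketch of a filename+mtime -> treehash cache. An entry guarantees:
    if `filename` still has mtime == `mtime`, its treehash can be assumed
    to be the cached value without rehashing the file's contents."""

    def __init__(self):
        self._cache = {}  # (filename, mtime) -> treehash

    def store(self, filename: str, mtime: int, treehash: str) -> None:
        self._cache[(filename, mtime)] = treehash

    def lookup(self, filename: str, mtime: int):
        """Return the cached treehash, or None if the mtime changed
        (or the file was never hashed) and it must be recomputed."""
        return self._cache.get((filename, mtime))
```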