Slind14
> also wondering why a not-first backup takes that long. does the dedup not work or is it really lots of NEW data? There is more new data than 100MBit/s...
> borg manages caching, indexes and locking based on the repo id (which is unique and random). so you can run borg on the same machine, as the same user,...
> iirc there is some --upload-buffer (or so) option, maybe you can try using that to speed it up. the data is already compressed, hence we don't use any ---...
> another idea is not to use different repo for partitions of the data, but for different times. the majority of the new data is from the last 24 hours...
> --upload-buffer is about buffering, not compression. Sorry I quoted the wrong line. ;)
Unfortunately, changing the buffer does not help. Restic added parallel uploads not too long ago; if borg had something similar it would be great. https://github.com/restic/restic/pull/3593 https://github.com/restic/restic/pull/3513
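For context, the idea behind the restic PRs linked above is to upload multiple blobs/pack files concurrently instead of one at a time, so a single slow connection no longer serializes the whole transfer. A minimal sketch of that pattern, assuming a hypothetical `upload_chunk` function standing in for the real network call (not borg's or restic's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

def upload_chunk(chunk_id: int) -> int:
    # Hypothetical stand-in for a real network upload; it just returns
    # the chunk id to show the call completed.
    return chunk_id

def upload_all(chunk_ids, workers=4):
    # Run uploads concurrently: while one worker waits on the network,
    # the others keep transferring, which is what parallel uploads buy you.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(upload_chunk, chunk_ids))

print(upload_all(range(8)))  # → [0, 1, 2, 3, 4, 5, 6, 7]
```

`pool.map` preserves input order, so results come back in the order the chunks were submitted even though the uploads overlap in time.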
I see, thank you.
@raix do you have any suggestions on how to resolve this issue or go about debugging it?
I'm not sure whether you see a solution, but in any case: chunk unloading tends to be more intense than loading once the active/ticking tile entities exceed 15k in the world....
Sounds good. My main concern regarding the collision fix (and the entity-list batch remove) was that there might be users who don't want, or can't use, the multithreading part...