Lakshmipathi Ganapathi
Does your mount point have subvolumes?
Really sorry about the delay. I believe you are running into a known issue (https://github.com/Lakshmipathi/dduper#known-issues): dduper works only with the top-level default subvolume.
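If you want to check whether a mount point contains subvolumes, standard btrfs-progs can list them; the mount point path below is just a placeholder:
```
# List subvolumes below the given path; empty output means
# only the top-level default subvolume is in use.
sudo btrfs subvolume list /mnt/btrfs

# Show which subvolume is currently set as the default.
sudo btrfs subvolume get-default /mnt/btrfs
```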
@RaymondSchnyder Can you share the exact error message? Also share details like how big the directory is: does it have a large number of small files or a small number of large files? dduper installed in...
Did you try the https://github.com/Lakshmipathi/dduper#changing-dedupe-chunk-size option? Did that not work?
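A rough sketch of what changing the chunk size looks like; the `--chunk-size` flag name and the value are assumptions based on the README section linked above, so verify the exact spelling with `dduper --help`:
```
# Assumed invocation: re-run dduper with a larger dedupe chunk size.
# Flag name and unit are assumptions; check `dduper --help`.
sudo python2 ./dduper --device /dev/sda1 --dir /btrfs/ddtest/ --chunk-size 128
```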
@broetchenrackete36 thanks. Could you please try the steps below and share the results? Let's first check whether the `dump-csum` option is working properly. If this fails, dduper won't work. ``` btrfs inspect-internal...
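The snippet above is truncated; a minimal sketch of that kind of check, assuming the patched btrfs-progs that dduper's README describes (the file path and device below are placeholders):
```
# Dump the per-block csums for a file from the btrfs csum tree.
# `dump-csum` only exists in the patched btrfs-progs shipped with dduper;
# the file and device arguments are placeholders.
sudo btrfs inspect-internal dump-csum /btrfs/ddtest/f1 /dev/sda1
```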
> Also I'm not sure why the total size deduped is 0 on the actual dedupe... Before you try the steps above (https://github.com/Lakshmipathi/dduper/issues/8#issuecomment-664772029), can you get the latest `dduper` file...
> Thanks for the response. I applied the fix but I still get 0 for total deduped size. That's strange. If you run `sudo python2 ./dduper --device /dev/sda1 --dir /btrfs/ddtest/` and...
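Independent of dduper's own summary, one way to check whether extents actually became shared is to compare `btrfs filesystem du` output before and after a run (the path is a placeholder):
```
# "Set shared" grows when dedupe replaces duplicate extents with
# references to a single copy; "Exclusive" shrinks accordingly.
sudo btrfs filesystem du -s /btrfs/ddtest/
```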
Thanks for the details. Let me check whether dduper can support a RAID setup.
Update: I tried the above setup and it gave me different errors: ``` bad tree block 22036480, bytenr mismatch, want=22036480, have=0 ERROR: cannot read chunk root unable to open /dev/sda bad tree...
The issue is related to the blake2 csum. I don't know exactly why the blake2 csums fetched for files with the same content differ. Here is a simple way to reproduce the issue:...
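The original reproduction steps are truncated above; a hypothetical sketch of one way to hit the same symptom, assuming the patched btrfs-progs with `dump-csum` and a filesystem created with the blake2 checksum type (devices, paths, and the stdout redirection are assumptions, not the author's exact steps):
```
# Create a btrfs filesystem using the blake2 checksum type (placeholder device).
mkfs.btrfs -f --csum blake2 /dev/sdb1
mount /dev/sdb1 /mnt/blake2

# Write two files with identical content and flush them to disk.
dd if=/dev/urandom of=/mnt/blake2/f1 bs=1M count=8
cp /mnt/blake2/f1 /mnt/blake2/f2
sync

# With identical content the dumped csums should match; the bug is that
# they differ. Redirecting dump-csum output to files is an assumption.
sudo btrfs inspect-internal dump-csum /mnt/blake2/f1 /dev/sdb1 > /tmp/f1.csum
sudo btrfs inspect-internal dump-csum /mnt/blake2/f2 /dev/sdb1 > /tmp/f2.csum
diff /tmp/f1.csum /tmp/f2.csum
```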