gilbertchen
I think per-user encryption is not only possible but also very easy to implement. Duplicacy already implements convergent encryption for chunks; that is, chunks are encrypted using their own...
> So there's not even a need for a shared key to derive the chunk-key from?

There is a shared key, `hashKey`, used to derive the HMAC hash of the chunk....
I think you can set the proxy URL in the environment variable HTTP_PROXY or HTTPS_PROXY. This is a feature provided by Go's HTTP client library.
Go's ssh library doesn't provide an option to connect via a proxy, so to make this work with sftp would require a significant amount of effort.
This pull request has been mentioned on **Duplicacy Forum**. There might be relevant details there: https://forum.duplicacy.com/t/restore-multiple-files-web-ui/5320/17
This pull request has been mentioned on **Duplicacy Forum**. There might be relevant details there: https://forum.duplicacy.com/t/restore-multiple-files-web-ui/5320/20
This pull request has been mentioned on **Duplicacy Forum**. There might be relevant details there: https://forum.duplicacy.com/t/optimizations-for-large-datasets/7233/9
This pull request has been mentioned on **Duplicacy Forum**. There might be relevant details there: https://forum.duplicacy.com/t/optimizations-for-large-datasets/7233/16
This pull request has been mentioned on **Duplicacy Forum**. There might be relevant details there: https://forum.duplicacy.com/t/optimizations-for-large-datasets/7233/18
This pull request has been mentioned on **Duplicacy Forum**. There might be relevant details there: https://forum.duplicacy.com/t/optimizations-for-large-datasets/7233/17