acd_cli
Store original file modification time
It would make sense if the original file modification time was preserved on upload.
It seems that the node API's `modifiedDate` metadata attribute is used internally by ACD and is ignored if supplied by the client on file upload. It behaves more like the POSIX `ctime` attribute, in that it is updated on both content and metadata changes. This is unfortunate, as it would mean storing the original mtime in a custom metadata attribute.
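To make the idea concrete, here is a minimal sketch of what recording the original mtime as a custom property could look like. The `upload_file` and `add_property` calls and the `acd-cli.mtime` key are hypothetical placeholders for whatever the client library actually exposes, not acd_cli's real API.

```python
# Minimal sketch only; upload_file, add_property and the property name are
# hypothetical stand-ins for whatever the client library actually provides.
import os

MTIME_PROPERTY = 'acd-cli.mtime'  # hypothetical custom metadata key

def upload_preserving_mtime(client, local_path, parent_node_id):
    """Upload a file and store its original mtime as a custom node property."""
    mtime = os.stat(local_path).st_mtime
    node = client.upload_file(local_path, parent_node_id)        # hypothetical call
    client.add_property(node['id'], MTIME_PROPERTY, repr(mtime))  # hypothetical call
    return node
```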
It is useful to have the original modification time available, especially for its traditional use in synchronization. While the MD5 checksum enables a technically more powerful sync, a readily available size+mtime comparison remains a useful primary (or secondary) option. An example of this is the ubiquitous rsync, with its default "quick check" algorithm (size+mtime).
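For illustration, a size+mtime comparison in the spirit of rsync's default quick check might look like the sketch below. This is only the idea, not rsync's actual code; the one-second tolerance is an assumption to allow for coarse-grained timestamps on some filesystems or remote stores.

```python
import os

def quick_check_unchanged(src_path, dst_path, mtime_window=1.0):
    """rsync-style quick check: treat the file as unchanged if size and
    mtime match (within a tolerance for coarse-grained timestamps)."""
    src, dst = os.stat(src_path), os.stat(dst_path)
    return (src.st_size == dst.st_size
            and abs(src.st_mtime - dst.st_mtime) <= mtime_window)
```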
With FUSE support coming, storing the mtime would make even more sense.
I have not checked if other ACD clients do something similar. It would be great if they could agree on how such metadata would be stored.
At least for Photos / Videos this isn't really a problem, though, right? As far as I can see, Amazon can display the time of upload (`Added`) as well as the date/time the file was `Taken`.
Related to this is the same issue regarding mtime when you use FUSE.
I'm not yet familiar with the codebase, but may be willing to jump in and help on this issue, as it is important to me (e.g. for rsync to work, with preserving timestamps).
I looked enough to find the `utimens` function in the `acd_fuse.py` source, and see the note there that it is non-functional.
Quick question before I dig in more: is it non-functional because nobody spent time on it yet, or is it non-functional because it isn't possible without some other dependency being taken care of?
Or is this issue still open because some users depend on the way it currently works?
It's possible to do that. Right now I'm working on an Amazon Cloud Drive backend for the S3QL filesystem, and S3QL requires writable metadata. The Amazon Cloud Drive API can save properties and labels; the problem is the limitation of 255 characters per label/property and a maximum of 10 labels/properties per node. But yes, it's possible and easy :)
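To illustrate the constraint (assuming the limits quoted above, 10 properties of at most 255 characters each, are accurate), one way to pack a larger metadata blob would be to chunk it across numbered properties:

```python
# Sketch based on the limits quoted above (10 properties, 255 chars each);
# the "meta00".."meta09" key scheme is made up for illustration.
MAX_PROPERTIES = 10
MAX_VALUE_LEN = 255

def split_into_properties(blob):
    """Split an opaque metadata string into numbered 255-character chunks."""
    chunks = [blob[i:i + MAX_VALUE_LEN] for i in range(0, len(blob), MAX_VALUE_LEN)]
    if len(chunks) > MAX_PROPERTIES:
        raise ValueError('metadata does not fit into %d properties' % MAX_PROPERTIES)
    return {'meta%02d' % i: chunk for i, chunk in enumerate(chunks)}
```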
Is your implementation on GitHub? Any notable differences between this and yours?
-Saqeb
Not yet, it's under development, but it's not the same as acd_cli; have a look at the S3QL documentation.
Anyway, it's easy to implement `utimens` and `xattr` in acd_cli. The acd_cli API already supports metadata (labels or properties), so it's only necessary to implement the `utimens` method using the existing API client :)
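For what it's worth, a `utimens` implementation in a fusepy-style operations class could be as small as the sketch below. `resolve` and `add_property` stand in for whatever node-lookup and metadata calls the existing API client offers; they are hypothetical here, as is the property name.

```python
import time

class ACDFuseSketch:  # stands in for the existing FUSE operations class
    MTIME_PROPERTY = 'acd-cli.mtime'  # hypothetical custom metadata key

    def utimens(self, path, times=None):
        # fusepy passes (atime, mtime) or None, which means "set to now".
        atime, mtime = times if times else (time.time(), time.time())
        node = self.resolve(path)                                         # hypothetical lookup
        self.api.add_property(node.id, self.MTIME_PROPERTY, repr(mtime))  # hypothetical call
        return 0
```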
I've implemented this in the PR; this and other fixes enable rsync over acd_fuse.
I came here because I was wondering whether I'm re-uploading all my data every time I use `rsync`, only to find out that I actually do.
Since uploading via `acd_cli ul` messes with the FUSE mount, this is an absolutely critical feature for going through the mount instead. Thank you @bgemmill for your awesome PR.