Chris Lu
@zuzuviewer which commit should be reverted, and for what reason?
Your concern was addressed in https://github.com/seaweedfs/seaweedfs/commit/9f07bca9cc9a6bd26a29e567ed8bd9b2ffc8aea0
#1 sounds good. What are the other places that need the data size? Is it possible to limit the impact scope to just the max volume count calculation?
This piece of code may need some adjustment: https://github.com/seaweedfs/seaweedfs/blob/3.67/weed/s3api/s3api_object_handlers.go#L154-L176
The data on the filer and on the volume servers works together. If treating SeaweedFS as a cache, you need to reset both the filer data and the volume data. In this case, need to...
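A minimal sketch of what such a reset could look like, assuming a filer at `localhost:8888` and a hypothetical cache tree under `/cache`: deleting through the filer removes the metadata entries and the chunks they reference on the volume servers, so both sides are reset together.

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Deleting through the filer removes the metadata entries and the
	// chunks they reference on the volume servers, resetting both together.
	req, err := http.NewRequest(http.MethodDelete,
		"http://localhost:8888/cache/?recursive=true", nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("filer delete status:", resp.Status)
}
```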
> @chrislusf This happened after the commit
> https://github.com/seaweedfs/seaweedfs/commit/d8c574a5ef1a811f9a0d447097d9edfcc0c1d84c

Is this commit between 3.87 and 3.88?
Added https://github.com/seaweedfs/seaweedfs/pull/6829. I was confused by the names "fsync" and "syncWrite": "fsync" is actually implemented with asynchronous writes, in order to amortize the flush cost.
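A rough illustration of that amortization idea (not the actual SeaweedFS code): writes return immediately, and a single background goroutine calls `Sync` on an interval, so one flush covers every write since the previous tick.

```go
package main

import (
	"os"
	"sync"
	"time"
)

// amortizedWriter accepts writes immediately and fsyncs in the background,
// so one flush is amortized across many writes.
type amortizedWriter struct {
	f    *os.File
	done chan struct{}
	wg   sync.WaitGroup
}

func newAmortizedWriter(f *os.File, interval time.Duration) *amortizedWriter {
	w := &amortizedWriter{f: f, done: make(chan struct{})}
	w.wg.Add(1)
	go func() {
		defer w.wg.Done()
		t := time.NewTicker(interval)
		defer t.Stop()
		for {
			select {
			case <-t.C:
				w.f.Sync() // one fsync covers every write since the last tick
			case <-w.done:
				return
			}
		}
	}()
	return w
}

// Write lands in the OS buffer and returns without waiting for a flush.
func (w *amortizedWriter) Write(p []byte) (int, error) { return w.f.Write(p) }

func (w *amortizedWriter) Close() error {
	close(w.done) // stop the background flusher first
	w.wg.Wait()
	w.f.Sync() // final flush before closing
	return w.f.Close()
}

func main() {
	f, err := os.CreateTemp("", "amortized")
	if err != nil {
		panic(err)
	}
	w := newAmortizedWriter(f, 100*time.Millisecond)
	w.Write([]byte("buffered write\n"))
	w.Close()
}
```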
> I'm interested in implementing this as one of my first issues, @chrislusf can I claim this one? Thanks!

Please explain your approach first.
One concern is that if a file is partially updated, for example a file that is split into 2 chunks where only one chunk is updated, then the older...
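A hypothetical illustration of that concern (the types and chunk IDs below are made up): in a file split into two chunks where only the second is rewritten, the untouched chunk keeps its old timestamp, so anything keyed on per-chunk times sees the two halves of the same file age apart.

```go
package main

import (
	"fmt"
	"time"
)

// chunk and fileEntry are illustrative types, not SeaweedFS's own.
type chunk struct {
	fileID  string
	modTime time.Time
}

type fileEntry struct {
	path   string
	chunks []chunk
}

func main() {
	created := time.Now().Add(-48 * time.Hour)
	f := fileEntry{
		path: "/buckets/demo/object",
		chunks: []chunk{
			{fileID: "3,01637037d6", modTime: created},
			{fileID: "3,01637037d7", modTime: created},
		},
	}

	// Partial update: only the second chunk is rewritten.
	f.chunks[1] = chunk{fileID: "4,02a7b3c4d5", modTime: time.Now()}

	for _, c := range f.chunks {
		fmt.Printf("chunk %s age: %v\n", c.fileID, time.Since(c.modTime).Round(time.Hour))
	}
	// With a 24h TTL keyed on chunk modification time, the untouched chunk
	// would expire a day before the rewritten one, leaving the file broken.
}
```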
S3 objects are always newly created anyway. No need to use the modification time for TTL?