Chris Lu

Results: 761 comments of Chris Lu

Just copying the file should be fine. It seems the file was written by some process. I wonder whether it is a multi-threaded process.

Is it possible for multiple processes to access one file concurrently?

Most use cases are within a data center and slow network is rare.

I am not convinced all of these are real issues. In many cases, the current code can tolerate the data race. For example, https://github.com/seaweedfs/seaweedfs/issues/3510 is just busy waiting on a `currentMaster`...

What problem would removing the retry help with?

Would like to help! Please test with the latest version (3.27 as of now), and report any issues.

@romilbhardwaj you may want to run `weed server -s3` to start the master, volume server, filer, and s3 APIs at the same time. And then you can run `weed mount...
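As a sketch, the combined server plus a mount could look like the following. The data directory, filer address, and mount point are placeholder assumptions, not verified defaults:

```shell
# Start master, volume server, filer, and S3 API in one process.
# -dir is an illustrative data directory.
weed server -s3 -dir=/data/seaweedfs

# In another terminal, mount the filer as a local filesystem.
# The filer address and mount point are assumptions for illustration.
weed mount -filer=localhost:8888 -dir=/mnt/seaweedfs
```

Running everything in one `weed server` process keeps a test setup simple; for production the components are usually split out.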

For your case, the seaweedfs cluster is the cache. The data is asynchronously replicated back to S3 via the `weed filer.remote.sync` process. You can also "uncache" some data via a...
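A minimal sketch of that asynchronous write-back, assuming a remote storage mount has already been configured (the directory name is hypothetical):

```shell
# Continuously replicate local changes under a remote-mounted directory
# back to the S3 remote. "/buckets/mybucket" is a hypothetical mount path.
weed filer.remote.sync -dir=/buckets/mybucket
```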

Reading from remote S3 is also cached. You need to "uncache" it via commands in the same way. If size is a concern, you may want to start a volume server, by running...
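For example, cached remote data can be evicted from within `weed shell`; the directory here is a hypothetical remote mount path:

```shell
# Inside `weed shell`: drop the locally cached copies of remote data.
echo "remote.uncache -dir=/buckets/mybucket" | weed shell
```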

For a running process, how to read the new list of masters?