Eugene Klimov

Results: 513 comments by Eugene Klimov

Looks like a duplicate of https://github.com/AlexAkulov/clickhouse-backup/issues/470. Try to use version 1.4.9+.

Doesn't reproduce on my side. When you run `create & upload backuptest2` you upload a full backup instead of an incremental one; use --diff-from (or --diff-from-remote) if you want to upload the backup as incremental...
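For illustration, a minimal sketch of an incremental upload; the previous backup name `backuptest1` is hypothetical, while `backuptest2` and the `--diff-from` / `--diff-from-remote` flags come from the comment above:

```bash
# Create a new local backup (name taken from the comment above).
clickhouse-backup create backuptest2

# Upload it as an increment against a previous remote backup;
# "backuptest1" is a hypothetical name for that earlier backup.
clickhouse-backup upload --diff-from-remote=backuptest1 backuptest2

# Or, if the previous backup still exists locally:
# clickhouse-backup upload --diff-from=backuptest1 backuptest2
```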

@wzisheng does the issue still reproduce in your environment?

Closing the issue because it is not reproduced and the author hasn't provided any further information.

Are you sure `clickhouse-backup server` is still running? Could you share the results of the following queries?

```sql
SHOW CREATE TABLE system.backup_list;
SELECT * FROM system.backup_actions;
```

Could you also share ```bash...
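For reference, a hedged sketch of running those queries through clickhouse-client while `clickhouse-backup server` is running; default host/port and the example backup name are assumptions:

```bash
# Check that the server-mode system tables respond; these are the queries requested above.
clickhouse-client --query "SHOW CREATE TABLE system.backup_list"
clickhouse-client --query "SELECT * FROM system.backup_actions"

# Commands can also be submitted through system.backup_actions
# (hypothetical backup name, same pattern as the INSERT shown further below):
# clickhouse-client --query "INSERT INTO system.backup_actions(command) VALUES('create my_backup')"
```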

Could you share the result of the following command?

```bash
LOG_LEVEL=debug S3_DEBUG=true clickhouse-backup list remote
```

Which S3 backend do you use, AWS S3 or something else?
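A hedged way to capture that output for sharing (the log file path is arbitrary):

```bash
# Run the listing with verbose logging and S3 wire-level debug enabled,
# saving everything to a file that can be attached to the issue.
LOG_LEVEL=debug S3_DEBUG=true clickhouse-backup list remote 2>&1 | tee /tmp/clickhouse-backup-list-debug.log
```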

> There are only 2k rows data in system.backup_list

`system.backup_list` depends on a local cache in the /tmp/ folder. Could you share `ls -la /tmp/`? 2k rows means you need to fetch 2000...
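A hedged sketch of what to look for; the exact cache file name is an assumption, not something stated above:

```bash
# List everything in /tmp/ so the cache files (sizes and timestamps) are visible.
ls -la /tmp/

# Hypothetical filter: cache files are assumed to contain "clickhouse-backup" in their names.
ls -la /tmp/ | grep -i clickhouse-backup
```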

> Received exception from server (version 22.2.3): Code: 1000. DB::Exception: Received from chi-datalake-ck-cluster-0-0:9000. DB::Exception: Timeout. (POCO_EXCEPTION) (query: INSERT INTO system.backup_actions(command) VALUES('delete remote 20220703-20220714-shard_3-business-asset_log_local')) Error on processing query: Code: 75. DB::ErrnoException:...

According to

```bash
HEAD /***bucket***/20220712-20220716-shard_0-business-dns_log_local/metadata.json HTTP/1.1
```

which returns

```
HTTP/1.1 404 Not Found
```

Are you sure your backup `20220712-20220716-shard_0-business-dns_log_local` upload was complete? Looks like you have a lot of broken...
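A hedged way to double-check from the client side whether the upload finished, assuming an AWS S3 backend; the bucket name is a placeholder since it is masked above:

```bash
# A completed backup should have a metadata.json object at its root;
# a 404 here means the upload never finished (or the backup is broken).
aws s3api head-object \
  --bucket "<your-bucket>" \
  --key "20220712-20220716-shard_0-business-dns_log_local/metadata.json"
```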