
Operator access secret issue

Open: chelomontilla opened this issue 2 years ago

Hi, I've been testing the operator and I think I've found an issue with the access secret configuration. I'm using version 0.18.5, installed from deploy/operator/clickhouse-operator-install-bundle.yaml, with the following custom configuration:

apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseOperatorConfiguration
metadata:
  name: chop-config
  namespace: clickhouse-operator
spec:
  annotation:
    exclude: []
    include: []
  clickhouse:
    access:
      password: ""
      port: 8123
      secret:
        name: clickhouse-operator-config
        namespace: clickhouse-operator
      username: ""
    configuration:
      file:
        path:
          common: config.d
          host: conf.d
          user: users.d
      network:
        hostRegexpTemplate: (chi-{chi}-[^.]+\d+-\d+|clickhouse\-{chi})\.{namespace}\.svc\.cluster\.local$
      user:
        default:
          networksIP:
          - ::1
          - 127.0.0.1
          password: default
          profile: default
          quota: default
  label:
    appendScope: "no"
    exclude: []
    include: []
  logger:
    alsologtostderr: "false"
    log_backtrace_at: ""
    logtostderr: "true"
    stderrthreshold: ""
    v: "1"
    vmodule: ""
  pod:
    terminationGracePeriod: 30
  reconcile:
    host:
      wait:
        exclude: true
        include: false
    runtime:
      threadsNumber: 10
    statefulSet:
      create:
        onFailure: ignore
      update:
        onFailure: rollback
        pollInterval: 5
        timeout: 300
  statefulSet:
    revisionHistoryLimit: 0
  template:
    chi:
      path: templates.d
  watch:
    namespaces:
    - clickhouse
---
apiVersion: v1
data:
  password: dGVzdA==
  username: dGVzdA==
kind: Secret
metadata:
  name: clickhouse-operator-config
  namespace: clickhouse-operator
type: Opaque

When I create the installation and try to connect with the username/password configured in the access section via the secret, I get an authentication error. The default username/password works. Is there something wrong with my configuration?
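
As a hedged sketch (not part of the original report): the secret values dGVzdA== decode to "test", and the operator names pods chi-<chi>-<cluster>-<shard>-<replica>-0, so with the CHI shared below the pod would be chi-clickhouse-installation-lec-0-0-0, and the "clickhouse" namespace is assumed from the operator's watch list. Something like this shows whether ClickHouse itself accepts those credentials:

# verify the user/password taken from the secret against the server directly
kubectl -n clickhouse exec chi-clickhouse-installation-lec-0-0-0 -- \
  clickhouse-client --user test --password test --query 'SELECT currentUser()'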

chelomontilla (Jul 08 '22)

@chelomontilla, could you show your CHI as well?

alex-zaitsev (Jul 08 '22)

Sure.

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "clickhouse-installation"
spec:
  configuration:
    settings:
      logger/level: information
    clusters:
      - name: "lec"
        # Templates are specified for this cluster explicitly
        templates:
          podTemplate: pod-template-with-volumes
          clusterServiceTemplate: chi-cluster-service-template
        layout:
          shardsCount: 1
          replicasCount: 2
    zookeeper:
      nodes:
      - host: zookeeper

  defaults:
    templates:
      serviceTemplate: chi-service-template
      shardServiceTemplate: chi-service-template
      replicaServiceTemplate: chi-service-template

  templates:
    serviceTemplates:
      - name: chi-service-template
        spec:
          ports:
            - name: http
              port: 8123
              protocol: TCP
              targetPort: 8123
            - name: tcp
              port: 9000
              protocol: TCP
              targetPort: 9000
            - name: interserver
              port: 9009
              protocol: TCP
              targetPort: 9009
          type: ClusterIP
      - name: chi-cluster-service-template
        metadata:
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-internal: "true"
        spec:
          ports:
            - name: http
              port: 8123
            - name: tcp
              port: 9000
          type: LoadBalancer
    podTemplates:
      - name: pod-template-with-volumes
        spec:
          securityContext:
            runAsUser: 65534
            runAsGroup: 65534
            fsGroup: 65534
          containers:
            - name: clickhouse
              image: clickhouse/clickhouse-server:22.3
              volumeMounts:
                - name: data-storage-vc-template
                  mountPath: /var/lib/clickhouse
                - name: log-storage-vc-template
                  mountPath: /var/log/clickhouse-server
              resources:
                limits:
                  memory: 8Gi
                requests:
                  cpu: 300m
                  memory: 2Gi
              securityContext:
                allowPrivilegeEscalation: false
    volumeClaimTemplates:
      - name: data-storage-vc-template
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 40Gi
      - name: log-storage-vc-template
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 2Gi

chelomontilla (Jul 11 '22)

Hi @chelomontilla, is the problem solved on your end? Have you tried with the 0.20.x operator version?

alex-zaitsev (Jan 20 '23)

I think this is still an issue. I am running ClickHouse 23.12 with operator 0.23. The file /var/lib/clickhouse/preprocessed_configs/users.xml has entries like

<myuser>
            <password from_env="CONFIGURATION_USERS_VAR_6_MYUSER_PASSWORD"/>
            <profile>default</profile>
            <quota>default</quota>
</myuser>

This is for a user whose password is defined via a secret in the cluster configuration, like

configuration:
  users:
    myuser/password:
      valueFrom:
        secretKeyRef:
          name: user-secret
          key: password

But if you run env inside the pod, the actual entry is CONFIGURATION_USERS_VAR_2_MYUSER_PASSWORD=<some password>

So it looks like there is a mismatch between the variable name referenced in the config created by the operator and the one actually injected into the pod. This happens if you restart the ClickHouse pod or update the configuration.
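
A hedged way to confirm the mismatch on a running pod (pod name and namespace are placeholders; the path and variable prefix are the ones from this comment) is to compare the variable names referenced in the preprocessed users config with the ones actually present in the container environment:

# names referenced by the generated config
kubectl -n <namespace> exec <chi-pod> -- \
  grep -o 'CONFIGURATION_USERS_VAR_[^"]*' /var/lib/clickhouse/preprocessed_configs/users.xml
# names actually injected into the container
kubectl -n <namespace> exec <chi-pod> -- env | grep CONFIGURATION_USERS_VAR_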

ukclivecox (Feb 04 '24)

I see the same behavior in my cluster as well:

clickhouse.altinity.com/app: chop
clickhouse.altinity.com/chop: 0.23.0

kalirajanselvaraja (Mar 14 '24)

I am also facing this issue. If I provide the env variables directly (without secrets) it works, but referencing them through secrets fails, even though I can see the environment variable inside the pod when I run "env".

Amansethi967 (Mar 19 '24)

@Amansethi967

could you provide your kind: ClickHouseInstallation manifest and your clickhouse-operator version?

Slach (Mar 19 '24)

@Slach

I am using 0.21.3 and facing the issue with the CHI below, which takes MinIO keys for an archival disk from env variables; please check the files section.

apiVersion: v1
kind: Secret
metadata:
  name: secret-env
  annotations:
    "helm.sh/hook-weight": "-10"
data:
  access_key: {{ .Values.secrets.access_key }}
  secret_key: {{ .Values.secrets.secret_key }}
---
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "demo"
spec:
  defaults:
    storageManagement:
      provisioner: StatefulSet
      reclaimPolicy: Retain
    templates:
      podTemplate: clickhouse-stable
      serviceTemplate: svc-template
      dataVolumeClaimTemplate: clickhouse-data-volume
  configuration:
    clusters:
      - name: "demo"
        layout:
          shardsCount: 1
          replicasCount: 1
    zookeeper:
        nodes:
        - host: kafka-cp-zookeeper-headless
          port: 2181
    users:
        demoreadonly/profile: readonly
        demo/password: demoPassword
        demo/profile: default
        demo/quota: default
        demo/networks/ip:
            - 0.0.0.0/0
            - ::/0
    profiles:
      demoreadonly/readonly: "1"
      # server level settings can be set here
      demo/data_type_default_nullable: 1 # data types in column definitions are set to Nullable by default
      demo/insert_distributed_sync: 1 # Data is inserted in synchronous mode
      demo/mutations_sync: 2 # query waits for all mutations to complete on all replicas
      demo/parallel_distributed_insert_select: 2 # SELECT and INSERT will be executed on each shard in parallel
      demo/distributed_product_mode: allow # Allows the use of these types of subqueries
    files:
      config.d/Data_Retention_config.xml: |-
        <clickhouse>
           <storage_configuration>
              <disks>
                 <archival_disk>
                    <type>s3_plain</type>
                    <endpoint>http://minio:9000/archives/</endpoint>
                    <access_key_id from_env="MINIO_ACCESS_KEY"></access_key_id>
                    <secret_access_key from_env="MINIO_SECRET_KEY"></secret_access_key>
                </archival_disk>
              </disks>
              <policies>
                 <archival_volume>
                    <volumes>
                      <main>
                        <disk>archival_disk</disk>
                      </main>
                    </volumes>
                 </archival_volume>
              </policies>
           </storage_configuration>
        </clickhouse>
      config.d/max_suspicious_broken_parts.xml: |-
        <?xml version="1.0"?>
        <yandex>
             <merge_tree>
                 <max_suspicious_broken_parts>200</max_suspicious_broken_parts>
             </merge_tree>
        </yandex>
      config.d/log_rotation.xml: |-
        <clickhouse>
            <timezone>UTC</timezone>
            <logger>
                <level>information</level>
                <log>/var/log/clickhouse-server/clickhouse-server.log</log>
                <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
                <size>100M</size>
                <count>5</count>
                <console>1</console>
            </logger>
        </clickhouse>
  templates:
    serviceTemplates:
      - name: svc-template
        generateName: clickhouse
        metadata:
        spec:
          ports:
            - name: http
              port: 8123
            - name: tcp
              port: 9000
          type: ClusterIP
    podTemplates:
    - name: clickhouse-stable
      metadata:
        labels:
          app: chdb
      spec:
        affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchExpressions:
                      - key: "app"
                        operator: In
                        values:
                          - "chdb"
                  topologyKey: "kubernetes.io/hostname"
            # Specify Pod affinity to nodes in specified availability zone
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: "ch"
                        operator: In
                        values:
                          - "True"
        containers:
        - name: clickhouse
          image: clickhouse/clickhouse-server:23.12.4
          env:          
          - name: MINIO_ACCESS_KEY
            valueFrom:
              secretKeyRef:
                name: secret-env
                key: access_key
          - name: MINIO_SECRET_KEY
            valueFrom:
              secretKeyRef:
                name: secret-env
                key: secret_key
          ports:
          - containerPort: 9000
            name: db
          - containerPort: 8123
            name: httpdb
          lifecycle:
            postStart:
              exec:
                command: ["/bin/bash", "-c", "mkdir -p /var/lib/clickhouse/flags; touch /var/lib/clickhouse/flags/force_restore_data"]
          volumeMounts:
          - name: clickhouse-data-volume
            mountPath: /var/lib/clickhouse/
    volumeClaimTemplates:
      - name: clickhouse-data-volume
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 3Gi
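
A hedged sketch for checking where the variables get lost (pod and ConfigMap names follow the operator's chi-demo-demo-0-0 naming, which the pod spec later in this thread confirms; namespace default assumed): list the env vars the pod spec actually declares, and check that the generated ConfigMap still carries the from_env references.

# env var names declared on the clickhouse container
kubectl -n default get pod chi-demo-demo-0-0-0 \
  -o jsonpath='{range .spec.containers[0].env[*]}{.name}{"\n"}{end}'
# from_env references in the operator-generated config
kubectl -n default get configmap chi-demo-common-configd -o yaml | grep from_env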

Amansethi967 (Mar 19 '24)

@Amansethi967 please upgrade your clickhouse-operator to 0.23.4, and remember to upgrade the CRDs as well.
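
A hedged note on that step: the CRDs ship inside the install bundle, so re-applying the bundle for the target release updates them. The raw URL below follows the repository path mentioned at the top of this thread; the tag segment is an assumption, so adjust it to the release you actually install.

# re-apply the install bundle (includes the CRDs) for the chosen release
kubectl apply -f https://raw.githubusercontent.com/Altinity/clickhouse-operator/0.23.4/deploy/operator/clickhouse-operator-install-bundle.yaml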

Slach (Mar 19 '24)

@Amansethi967 could you share the output of these two commands?

kubectl get pod -n <your-namespace> chi-demo-demo-0-0-0 -o yaml
kubectl exec -n <your-namespace> chi-demo-demo-0-0-0 -- sh -c 'echo $MINIO_ACCESS_KEY'

Slach (Mar 19 '24)

@Slach I upgraded the operator and tried again, but I still hit the same issue. I won't be able to echo the env var, since the pod is going into an error state. However, I have tested by setting the variables directly in the format below, and that works without any issue.

This way it works:

env:
- name: MINIO_ACCESS_KEY
  value: "Gdgiegbdjvdiuv"

Here is the pod spec (kubectl get pod -o yaml), as requested:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2024-03-19T12:46:04Z"
  generateName: chi-demo-demo-0-0-
  labels:
    app: chdb
    clickhouse.altinity.com/app: chop
    clickhouse.altinity.com/chi: demo
    clickhouse.altinity.com/cluster: demo
    clickhouse.altinity.com/namespace: default
    clickhouse.altinity.com/ready: "no"
    clickhouse.altinity.com/replica: "0"
    clickhouse.altinity.com/shard: "0"
    controller-revision-hash: chi-demo-demo-0-0-56c45fbb6d
    statefulset.kubernetes.io/pod-name: chi-demo-demo-0-0-0
  name: chi-demo-demo-0-0-0
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: chi-demo-demo-0-0
    uid: 6f41b55e-cfe9-4d51-b024-8e79f131404a
  resourceVersion: "69258603"
  uid: 8f7a8909-7850-47b1-a1f2-316f53b8d433
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: ch
            operator: In
            values:
            - "True"
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - chdb
        topologyKey: kubernetes.io/hostname
  containers:
  - env:
    - name: MINIO_ACCESS_KEY
      valueFrom:
        secretKeyRef:
          key: access_key
          name: secret-env
    - name: MINIO_SECRET_KEY
      valueFrom:
        secretKeyRef:
          key: secret_key
          name: secret-env
    image: clickhouse/clickhouse-server:23.12.4
    imagePullPolicy: IfNotPresent
    lifecycle:
      postStart:
        exec:
          command:
          - /bin/bash
          - -c
          - mkdir -p /var/lib/clickhouse/flags; touch /var/lib/clickhouse/flags/force_restore_data
    livenessProbe:
      failureThreshold: 10
      httpGet:
        path: /ping
        port: http
        scheme: HTTP
      initialDelaySeconds: 60
      periodSeconds: 3
      successThreshold: 1
      timeoutSeconds: 1
    name: clickhouse
    ports:
    - containerPort: 9000
      name: db
      protocol: TCP
    - containerPort: 8123
      name: httpdb
      protocol: TCP
    - containerPort: 9000
      name: tcp
      protocol: TCP
    - containerPort: 8123
      name: http
      protocol: TCP
    - containerPort: 9009
      name: interserver
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /ping
        port: http
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 3
      successThreshold: 1
      timeoutSeconds: 1
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/lib/clickhouse/
      name: clickhouse-data-volume
    - mountPath: /etc/clickhouse-server/config.d/
      name: chi-demo-common-configd
    - mountPath: /etc/clickhouse-server/users.d/
      name: chi-demo-common-usersd
    - mountPath: /etc/clickhouse-server/conf.d/
      name: chi-demo-deploy-confd-demo-0-0
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-7gcqz
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostAliases:
  - hostnames:
    - chi-demo-demo-0-0
    ip: 127.0.0.1
  hostname: chi-demo-demo-0-0-0
  nodeName: e2e-88-118
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  subdomain: chi-demo-demo-0-0
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: clickhouse-data-volume
    persistentVolumeClaim:
      claimName: clickhouse-data-volume-chi-demo-demo-0-0-0
  - configMap:
      defaultMode: 420
      name: chi-demo-common-configd
    name: chi-demo-common-configd
  - configMap:
      defaultMode: 420
      name: chi-demo-common-usersd
    name: chi-demo-common-usersd
  - configMap:
      defaultMode: 420
      name: chi-demo-deploy-confd-demo-0-0
    name: chi-demo-deploy-confd-demo-0-0
  - name: kube-api-access-7gcqz
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-03-19T12:46:04Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2024-03-19T12:46:04Z"
    message: 'containers with unready status: [clickhouse]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2024-03-19T12:46:04Z"
    message: 'containers with unready status: [clickhouse]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2024-03-19T12:46:04Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://471a6ac42503de8a1ad9366c030ddc921c21407435439ad7c6208d618418dc79
    image: docker.io/clickhouse/clickhouse-server:23.12
    imageID: docker.io/clickhouse/clickhouse-server@sha256:3254ee3d1894ff9bae71d27a126125638c4219b87c7ebfea0d4cf97888229a43
    lastState:
      terminated:
        containerID: containerd://471a6ac42503de8a1ad9366c030ddc921c21407435439ad7c6208d618418dc79
        exitCode: 243
        finishedAt: "2024-03-19T12:47:14Z"
        reason: Error
        startedAt: "2024-03-19T12:47:12Z"
    name: clickhouse
    ready: false
    restartCount: 3
    started: false
    state:
      waiting:
        message: back-off 40s restarting failed container=clickhouse pod=chi-demo-demo-0-0-0_default(8f7a8909-7850-47b1-a1f2-316f53b8d433)
        reason: CrashLoopBackOff
  hostIP: 172.16.67.29
  phase: Running
  podIP: 10.32.0.198
  podIPs:
  - ip: 10.32.0.198
  qosClass: BestEffort
  startTime: "2024-03-19T12:46:04Z"

Amansethi967 (Mar 19 '24)

"won't be able to echo the env var, since the pod is going into an error state."

Could you share logs?

kubectl logs -n default chi-demo-demo-0-0-0 -c clickhouse --since=24h

Slach (Mar 20 '24)

@Slach, these are the logs:

kubectl logs -n default chi-demo-demo-0-0-0 -c clickhouse --since=24h
ClickHouse Database directory appears to contain a database; Skipping initialization
Processing configuration file '/etc/clickhouse-server/config.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/chop-generated-hostname-ports.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/chop-generated-macros.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/chop-generated-zookeeper.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-01-listen.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-02-logger.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-03-query_log.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-04-part_log.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-05-trace_log.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/Data_Retention_config.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/chop-generated-remote_servers.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/log_rotation.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/max_suspicious_broken_parts.xml'.
Logging information to /var/log/clickhouse-server/clickhouse-server.log
Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
2024.03.20 05:44:42.825183 [ 1 ] {} <Information> SentryWriter: Sending crash reports is disabled
2024.03.20 05:44:42.974180 [ 1 ] {} <Information> Application: Starting ClickHouse 23.12.4.15 (revision: 54481, git hash: 4233d111d2023fdb43a677fc7e986af25c00edb0, build id: 77F12B9E80533FF63F2348020FBC2AC58B98E258), PID 1
2024.03.20 05:44:42.974415 [ 1 ] {} <Information> Application: starting up
2024.03.20 05:44:42.974447 [ 1 ] {} <Information> Application: OS name: Linux, version: 5.15.0-97-generic, architecture: x86_64
2024.03.20 05:44:42.984318 [ 1 ] {} <Information> Application: Available RAM: 57.47 GiB; physical cores: 12; logical cores: 12.
2024.03.20 05:44:42.988131 [ 1 ] {} <Warning> Context: Delay accounting is not enabled, OSIOWaitMicroseconds will not be gathered. Check /proc/sys/kernel/task_delayacct
2024.03.20 05:44:43.454252 [ 1 ] {} <Information> Application: Integrity check of the executable successfully passed (checksum: 53B8BE58207E028D513CC585088B0E63)
2024.03.20 05:44:43.454407 [ 1 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2024.03.20 05:44:43.468261 [ 1 ] {} <Information> Application: Setting max_server_memory_usage was set to 51.72 GiB (57.47 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2024.03.20 05:44:43.468335 [ 1 ] {} <Information> Application: Setting merges_mutations_memory_usage_soft_limit was set to 28.74 GiB (57.47 GiB available * 0.50 merges_mutations_memory_usage_to_ram_ratio)
2024.03.20 05:44:43.468345 [ 1 ] {} <Information> Application: Merges and mutations memory limit is set to 28.74 GiB
2024.03.20 05:44:43.469617 [ 1 ] {} <Information> BackgroundSchedulePool/BgBufSchPool: Create BackgroundSchedulePool with 16 threads
2024.03.20 05:44:43.471940 [ 1 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 512 threads
2024.03.20 05:44:43.613218 [ 1 ] {} <Information> BackgroundSchedulePool/BgMBSchPool: Create BackgroundSchedulePool with 16 threads
2024.03.20 05:44:43.615669 [ 1 ] {} <Information> BackgroundSchedulePool/BgDistSchPool: Create BackgroundSchedulePool with 16 threads
2024.03.20 05:44:43.631013 [ 1 ] {} <Information> CertificateReloader: One of paths is empty. Cannot apply new configuration for certificates. Fill all paths and try again.
2024.03.20 05:44:43.633292 [ 1 ] {} <Warning> Application: Listen [0.0.0.0]:9009 failed: Poco::Exception. Code: 1000, e.code() = 98, Net Exception: Address already in use: 0.0.0.0:9009 (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 05:44:43.633599 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://[::]:9009
2024.03.20 05:44:43.653279 [ 1 ] {} <Information> Context: Initialized background executor for merges and mutations with num_threads=16, num_tasks=32, scheduling_policy=round_robin
2024.03.20 05:44:43.656137 [ 1 ] {} <Information> Context: Initialized background executor for move operations with num_threads=8, num_tasks=8
2024.03.20 05:44:43.660508 [ 1 ] {} <Information> Context: Initialized background executor for fetches with num_threads=16, num_tasks=16
2024.03.20 05:44:43.666901 [ 1 ] {} <Information> Context: Initialized background executor for common operations (e.g. clearing old parts) with num_threads=8, num_tasks=8
2024.03.20 05:44:43.670210 [ 1 ] {} <Information> DNSCacheUpdater: Update period 15 seconds
2024.03.20 05:44:43.670278 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2024.03.20 05:44:43.672634 [ 1 ] {} <Information> DatabaseAtomic (system): Metadata processed, database system has 0 tables and 0 dictionaries in total.
2024.03.20 05:44:43.672733 [ 1 ] {} <Information> TablesLoader: Parsed metadata of 0 tables in 1 databases in 0.000320221 sec
2024.03.20 05:44:43.706291 [ 1 ] {} <Information> DatabaseCatalog: Found 0 partially dropped tables. Will load them and retry removal.
2024.03.20 05:44:43.706885 [ 1 ] {} <Information> DatabaseAtomic (default): Metadata processed, database default has 0 tables and 0 dictionaries in total.
2024.03.20 05:44:43.706918 [ 1 ] {} <Information> TablesLoader: Parsed metadata of 0 tables in 1 databases in 9.0157e-05 sec
2024.03.20 05:44:43.706975 [ 1 ] {} <Information> loadMetadata: Start synchronous loading of databases
2024.03.20 05:44:43.708039 [ 1 ] {} <Information> UserDefinedSQLObjectsLoaderFromDisk: Loading user defined objects from /var/lib/clickhouse/user_defined/
2024.03.20 05:44:43.708242 [ 1 ] {} <Information> Application: Tasks stats provider: procfs
2024.03.20 05:44:43.708259 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2024.03.20 05:44:43.708818 [ 1 ] {} <Warning> Application: Listen [::]:0 failed: Poco::Exception. Code: 1000, e.code() = 0, SSL Exception: Configuration error: no certificate file has been specified (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 05:44:43.709105 [ 1 ] {} <Warning> Application: Listen [::]:0 failed: Poco::Exception. Code: 1000, e.code() = 0, SSL Exception: Configuration error: no certificate file has been specified (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 05:44:43.737397 [ 1 ] {} <Warning> Application: Listen [0.0.0.0]:8123 failed: Poco::Exception. Code: 1000, e.code() = 98, Net Exception: Address already in use: 0.0.0.0:8123 (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 05:44:43.737641 [ 1 ] {} <Warning> Application: Listen [0.0.0.0]:0 failed: Poco::Exception. Code: 1000, e.code() = 0, SSL Exception: Configuration error: no certificate file has been specified (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 05:44:43.737764 [ 1 ] {} <Warning> Application: Listen [0.0.0.0]:9000 failed: Poco::Exception. Code: 1000, e.code() = 98, Net Exception: Address already in use: 0.0.0.0:9000 (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 05:44:43.737921 [ 1 ] {} <Warning> Application: Listen [0.0.0.0]:0 failed: Poco::Exception. Code: 1000, e.code() = 0, SSL Exception: Configuration error: no certificate file has been specified (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 05:44:43.738046 [ 1 ] {} <Warning> Application: Listen [0.0.0.0]:9004 failed: Poco::Exception. Code: 1000, e.code() = 98, Net Exception: Address already in use: 0.0.0.0:9004 (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 05:44:43.738205 [ 1 ] {} <Warning> Application: Listen [0.0.0.0]:9005 failed: Poco::Exception. Code: 1000, e.code() = 98, Net Exception: Address already in use: 0.0.0.0:9005 (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 05:44:43.738254 [ 1 ] {} <Information> CertificateReloader: One of paths is empty. Cannot apply new configuration for certificates. Fill all paths and try again.
2024.03.20 05:44:43.758073 [ 702 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 05:44:43.758218 [ 702 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 05:44:43.758249 [ 702 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: minio-api-nodeport.vsmaps:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 05:44:43.758287 [ 702 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 05:44:43.758337 [ 702 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_remove_objects_capability_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 05:44:43.760747 [ 1 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 05:44:43.760907 [ 1 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 05:44:43.760952 [ 1 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 05:44:43.760982 [ 1 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 05:44:43.763197 [ 702 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 05:44:43.763281 [ 702 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 05:44:43.763304 [ 702 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 05:44:43.763331 [ 702 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 05:44:43.763392 [ 702 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 05:44:43.765530 [ 1 ] {} <Error> virtual void DB::IDisk::checkAccessImpl(const String &): Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007fc2bd4e3609 in ?
10. ? @ 0x00007fc2bd408353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 05:44:43.772045 [ 1 ] {} <Information> Application: Shutting down storages.
2024.03.20 05:44:43.781903 [ 702 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 05:44:43.782089 [ 702 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 05:44:43.782127 [ 702 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 05:44:43.782209 [ 702 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 05:44:43.782242 [ 702 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_remove_objects_capability_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 05:44:43.783713 [ 688 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 05:44:43.783873 [ 688 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 05:44:43.783921 [ 688 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 05:44:43.783957 [ 688 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 05:44:43.786195 [ 702 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 05:44:43.786337 [ 702 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 05:44:43.786382 [ 702 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 05:44:43.786432 [ 702 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 05:44:43.786485 [ 702 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 05:44:43.787984 [ 688 ] {} <Error> virtual void DB::IDisk::checkAccessImpl(const String &): Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007fc2bd4e3609 in ?
10. ? @ 0x00007fc2bd408353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 05:44:43.789882 [ 688 ] {} <Error> void DB::SystemLog<DB::TraceLogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TraceLogElement]: Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4: While checking access for disk archival_disk. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007fc2bd4e3609 in ?
10. ? @ 0x00007fc2bd408353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 05:44:44.728555 [ 702 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 05:44:44.728658 [ 702 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 05:44:44.728751 [ 702 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 05:44:44.728779 [ 702 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 05:44:44.728853 [ 702 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_remove_objects_capability_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 05:44:44.729688 [ 698 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 05:44:44.729735 [ 698 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 05:44:44.729747 [ 698 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 05:44:44.729847 [ 698 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 05:44:44.730629 [ 702 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 05:44:44.730677 [ 702 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 05:44:44.730688 [ 702 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 05:44:44.730847 [ 702 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 05:44:44.730892 [ 702 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 05:44:44.731265 [ 698 ] {} <Error> virtual void DB::IDisk::checkAccessImpl(const String &): Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007fc2bd4e3609 in ?
10. ? @ 0x00007fc2bd408353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 05:44:44.733198 [ 698 ] {} <Error> void DB::SystemLog<DB::MetricLogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4: While checking access for disk archival_disk. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007fc2bd4e3609 in ?
10. ? @ 0x00007fc2bd408353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 05:44:44.739870 [ 702 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 05:44:44.739949 [ 702 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 05:44:44.739966 [ 702 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 05:44:44.739982 [ 702 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 05:44:44.740006 [ 702 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_remove_objects_capability_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 05:44:44.740951 [ 697 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 05:44:44.741117 [ 697 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 05:44:44.741181 [ 697 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 05:44:44.741268 [ 697 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 05:44:44.742141 [ 702 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 05:44:44.742190 [ 702 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 05:44:44.742270 [ 702 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 05:44:44.742295 [ 702 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 05:44:44.742314 [ 702 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 05:44:44.742662 [ 697 ] {} <Error> virtual void DB::IDisk::checkAccessImpl(const String &): Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007fc2bd4e3609 in ?
10. ? @ 0x00007fc2bd408353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 05:44:44.743922 [ 697 ] {} <Error> void DB::SystemLog<DB::BlobStorageLogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::BlobStorageLogElement]: Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4: While checking access for disk archival_disk. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007fc2bd4e3609 in ?
10. ? @ 0x00007fc2bd408353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 05:44:44.886238 [ 1 ] {} <Information> Application: Closed all listening sockets.
2024.03.20 05:44:44.886333 [ 1 ] {} <Information> Application: Closed connections to servers for tables.
2024.03.20 05:44:44.887352 [ 1 ] {} <Information> Application: Waiting for background threads
2024.03.20 05:44:44.999870 [ 1 ] {} <Information> Application: Background threads finished in 112 ms
2024.03.20 05:44:45.002476 [ 1 ] {} <Error> Application: Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4: While checking access for disk archival_disk. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007fc2bd4e3609 in ?
10. ? @ 0x00007fc2bd408353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 05:44:45.002684 [ 1 ] {} <Information> Application: shutting down
2024.03.20 05:44:45.004317 [ 54 ] {} <Information> BaseDaemon: Stop SignalListener thread

Amansethi967 avatar Mar 20 '24 05:03 Amansethi967

OK. It looks like this error:

2024.03.20 05:44:44.743922 [ 697 ] {} <Error> void DB::SystemLog<DB::BlobStorageLogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::BlobStorageLogElement]: 

Code: 499. DB::Exception: Message: , 
bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4: 

While checking access for disk archival_disk. (S3_ERROR), 
Stack trace (when copying this message, always include the lines below):

is what prevents clickhouse-server from initializing.
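For context: a 400 Bad Request from MinIO on this startup access check usually means the endpoint, bucket or credentials in the disk definition don't match what MinIO expects. Below is a minimal sketch of how such an archival_disk could be declared through the CHI, assuming the file is templated via spec.configuration.files; the endpoint host and keys are placeholders, not your actual Data_Retention_config.xml, so adapt it to however that file is really managed:

    spec:
      configuration:
        files:
          config.d/Data_Retention_config.xml: |
            <clickhouse>
              <storage_configuration>
                <disks>
                  <archival_disk>
                    <type>s3</type>
                    <!-- the endpoint must include the bucket name and end with "/" -->
                    <endpoint>http://MINIO_HOST_PLACEHOLDER:9000/archives/</endpoint>
                    <access_key_id>MINIO_ACCESS_KEY_PLACEHOLDER</access_key_id>
                    <secret_access_key>MINIO_SECRET_KEY_PLACEHOLDER</secret_access_key>
                    <!-- uncomment only to let the server start while debugging -->
                    <!-- <skip_access_check>true</skip_access_check> -->
                  </archival_disk>
                </disks>
                <policies>
                  <archival>
                    <volumes>
                      <main>
                        <disk>archival_disk</disk>
                      </main>
                    </volumes>
                  </archival>
                </policies>
              </storage_configuration>
            </clickhouse>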

Let's change the ClickHouseInstallation: remove the postStart hook

    lifecycle:
      postStart:
        exec:
          command: ["/bin/bash", "-c", "mkdir -p /var/lib/clickhouse/flags; touch /var/lib/clickhouse/flags/force_restore_data"]

and add a command to spec.templates.podTemplates[].spec.containers[] (a fuller sketch of where this goes follows below):

        containers:
        - name: clickhouse
          image: clickhouse/clickhouse-server:23.12.4
          command:
          - /bin/bash
          - -xec
          - env | grep MINIO; clickhouse-server -c /etc/clickhouse-server/config.xml
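For clarity, here is a minimal sketch of where that override sits in the CHI; the template name is an assumption, keep whichever pod template your cluster section already references:

    spec:
      templates:
        podTemplates:
          - name: clickhouse-pod-template   # assumed name, keep your own
            spec:
              containers:
                - name: clickhouse
                  image: clickhouse/clickhouse-server:23.12.4
                  command:
                    - /bin/bash
                    - -xec
                    - env | grep MINIO; clickhouse-server -c /etc/clickhouse-server/config.xml

Running the server in the foreground like this makes the MINIO_* environment variables and the full startup output show up in kubectl logs, which is what we need to see.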

and share the logs again.

Slach avatar Mar 20 '24 06:03 Slach

@Slach still the same error:

kubectl logs -f chi-demo-demo-0-0-0 
+ env
+ grep MINIO
MINIO_ACCESS_KEY_TEST=testingenv
MINIO_SECRET_KEY=minio123
MINIO_ACCESS_KEY=minio
+ su -c 'clickhouse-server -C /etc/clickhouse-server/config.xml' clickhouse
Processing configuration file '/etc/clickhouse-server/config.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/chop-generated-hostname-ports.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/chop-generated-macros.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/chop-generated-zookeeper.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-01-listen.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-02-logger.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-03-query_log.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-04-part_log.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-05-trace_log.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/Data_Retention_config.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/chop-generated-remote_servers.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/log_rotation.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/max_suspicious_broken_parts.xml'.
Logging information to /var/log/clickhouse-server/clickhouse-server.log
Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
2024.03.20 09:06:34.296372 [ 10 ] {} <Information> Application: Will watch for the process with pid 11
2024.03.20 09:06:34.296500 [ 11 ] {} <Information> Application: Forked a child process to watch
2024.03.20 09:06:34.297346 [ 11 ] {} <Information> SentryWriter: Sending crash reports is disabled
2024.03.20 09:06:34.390682 [ 11 ] {} <Information> Application: Starting ClickHouse 23.12.4.15 (revision: 54481, git hash: 4233d111d2023fdb43a677fc7e986af25c00edb0, build id: 77F12B9E80533FF63F2348020FBC2AC58B98E258), PID 11
2024.03.20 09:06:34.391043 [ 11 ] {} <Information> Application: starting up
2024.03.20 09:06:34.391101 [ 11 ] {} <Information> Application: OS name: Linux, version: 5.15.0-97-generic, architecture: x86_64
2024.03.20 09:06:34.399371 [ 11 ] {} <Information> Application: Available RAM: 57.47 GiB; physical cores: 12; logical cores: 12.
2024.03.20 09:06:34.400398 [ 11 ] {} <Warning> Context: Delay accounting is not enabled, OSIOWaitMicroseconds will not be gathered. Check /proc/sys/kernel/task_delayacct
2024.03.20 09:06:34.640474 [ 11 ] {} <Information> Application: Integrity check of the executable successfully passed (checksum: 53B8BE58207E028D513CC585088B0E63)
2024.03.20 09:06:34.640628 [ 11 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2024.03.20 09:06:34.658604 [ 11 ] {} <Information> Application: Setting max_server_memory_usage was set to 51.72 GiB (57.47 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2024.03.20 09:06:34.658670 [ 11 ] {} <Information> Application: Setting merges_mutations_memory_usage_soft_limit was set to 28.74 GiB (57.47 GiB available * 0.50 merges_mutations_memory_usage_to_ram_ratio)
2024.03.20 09:06:34.658681 [ 11 ] {} <Information> Application: Merges and mutations memory limit is set to 28.74 GiB
2024.03.20 09:06:34.659672 [ 11 ] {} <Information> BackgroundSchedulePool/BgBufSchPool: Create BackgroundSchedulePool with 16 threads
2024.03.20 09:06:34.663046 [ 11 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 512 threads
2024.03.20 09:06:34.820902 [ 11 ] {} <Information> BackgroundSchedulePool/BgMBSchPool: Create BackgroundSchedulePool with 16 threads
2024.03.20 09:06:34.825992 [ 11 ] {} <Information> BackgroundSchedulePool/BgDistSchPool: Create BackgroundSchedulePool with 16 threads
2024.03.20 09:06:34.828954 [ 11 ] {} <Information> CertificateReloader: One of paths is empty. Cannot apply new configuration for certificates. Fill all paths and try again.
2024.03.20 09:06:34.830228 [ 11 ] {} <Warning> Application: Listen [0.0.0.0]:9009 failed: Poco::Exception. Code: 1000, e.code() = 98, Net Exception: Address already in use: 0.0.0.0:9009 (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 09:06:34.830521 [ 11 ] {} <Information> Application: Listening for replica communication (interserver): http://[::]:9009
2024.03.20 09:06:34.843136 [ 11 ] {} <Information> Context: Initialized background executor for merges and mutations with num_threads=16, num_tasks=32, scheduling_policy=round_robin
2024.03.20 09:06:34.844314 [ 11 ] {} <Information> Context: Initialized background executor for move operations with num_threads=8, num_tasks=8
2024.03.20 09:06:34.846094 [ 11 ] {} <Information> Context: Initialized background executor for fetches with num_threads=16, num_tasks=16
2024.03.20 09:06:34.846931 [ 11 ] {} <Information> Context: Initialized background executor for common operations (e.g. clearing old parts) with num_threads=8, num_tasks=8
2024.03.20 09:06:34.856490 [ 11 ] {} <Information> DNSCacheUpdater: Update period 15 seconds
2024.03.20 09:06:34.856626 [ 11 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2024.03.20 09:06:34.859741 [ 11 ] {} <Information> DatabaseAtomic (system): Metadata processed, database system has 0 tables and 0 dictionaries in total.
2024.03.20 09:06:34.859850 [ 11 ] {} <Information> TablesLoader: Parsed metadata of 0 tables in 1 databases in 0.000266938 sec
2024.03.20 09:06:34.890765 [ 11 ] {} <Information> DatabaseCatalog: Found 0 partially dropped tables. Will load them and retry removal.
2024.03.20 09:06:34.891350 [ 11 ] {} <Information> DatabaseAtomic (default): Metadata processed, database default has 0 tables and 0 dictionaries in total.
2024.03.20 09:06:34.891391 [ 11 ] {} <Information> TablesLoader: Parsed metadata of 0 tables in 1 databases in 8.4924e-05 sec
2024.03.20 09:06:34.891443 [ 11 ] {} <Information> loadMetadata: Start synchronous loading of databases
2024.03.20 09:06:34.892202 [ 11 ] {} <Information> UserDefinedSQLObjectsLoaderFromDisk: Loading user defined objects from /var/lib/clickhouse/user_defined/
2024.03.20 09:06:34.892509 [ 11 ] {} <Information> Application: Tasks stats provider: procfs
2024.03.20 09:06:34.892540 [ 11 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2024.03.20 09:06:34.893175 [ 11 ] {} <Warning> Application: Listen [::]:0 failed: Poco::Exception. Code: 1000, e.code() = 0, SSL Exception: Configuration error: no certificate file has been specified (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 09:06:34.893400 [ 11 ] {} <Warning> Application: Listen [::]:0 failed: Poco::Exception. Code: 1000, e.code() = 0, SSL Exception: Configuration error: no certificate file has been specified (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 09:06:34.996007 [ 11 ] {} <Warning> Application: Listen [0.0.0.0]:8123 failed: Poco::Exception. Code: 1000, e.code() = 98, Net Exception: Address already in use: 0.0.0.0:8123 (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 09:06:34.996147 [ 11 ] {} <Warning> Application: Listen [0.0.0.0]:0 failed: Poco::Exception. Code: 1000, e.code() = 0, SSL Exception: Configuration error: no certificate file has been specified (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 09:06:34.996398 [ 11 ] {} <Warning> Application: Listen [0.0.0.0]:9000 failed: Poco::Exception. Code: 1000, e.code() = 98, Net Exception: Address already in use: 0.0.0.0:9000 (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 09:06:34.996545 [ 11 ] {} <Warning> Application: Listen [0.0.0.0]:0 failed: Poco::Exception. Code: 1000, e.code() = 0, SSL Exception: Configuration error: no certificate file has been specified (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 09:06:34.996686 [ 11 ] {} <Warning> Application: Listen [0.0.0.0]:9004 failed: Poco::Exception. Code: 1000, e.code() = 98, Net Exception: Address already in use: 0.0.0.0:9004 (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 09:06:34.996837 [ 11 ] {} <Warning> Application: Listen [0.0.0.0]:9005 failed: Poco::Exception. Code: 1000, e.code() = 98, Net Exception: Address already in use: 0.0.0.0:9005 (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 09:06:34.996905 [ 11 ] {} <Information> CertificateReloader: One of paths is empty. Cannot apply new configuration for certificates. Fill all paths and try again.
2024.03.20 09:06:35.023633 [ 660 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 09:06:35.023939 [ 660 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 09:06:35.024043 [ 660 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: minio-api-nodeport.vsmaps:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 09:06:35.024148 [ 660 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 09:06:35.024255 [ 660 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_remove_objects_capability_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 09:06:35.027125 [ 11 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 09:06:35.027220 [ 11 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 09:06:35.027241 [ 11 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 09:06:35.027307 [ 11 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 09:06:35.029930 [ 660 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 09:06:35.030241 [ 660 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 09:06:35.030384 [ 660 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 09:06:35.031209 [ 660 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 09:06:35.031371 [ 660 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 09:06:35.032154 [ 11 ] {} <Error> virtual void DB::IDisk::checkAccessImpl(const String &): Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007f4be0fe9609 in ?
10. ? @ 0x00007f4be0f0e353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 09:06:35.041152 [ 11 ] {} <Information> Application: Shutting down storages.
2024.03.20 09:06:35.049297 [ 660 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 09:06:35.049484 [ 660 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 09:06:35.049552 [ 660 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 09:06:35.049600 [ 660 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 09:06:35.049654 [ 660 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_remove_objects_capability_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 09:06:35.052973 [ 645 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 09:06:35.053062 [ 645 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 09:06:35.053090 [ 645 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 09:06:35.053111 [ 645 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 09:06:35.055577 [ 660 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 09:06:35.055769 [ 660 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 09:06:35.055934 [ 660 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 09:06:35.056011 [ 660 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 09:06:35.056071 [ 660 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 09:06:35.056541 [ 645 ] {} <Error> virtual void DB::IDisk::checkAccessImpl(const String &): Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007f4be0fe9609 in ?
10. ? @ 0x00007f4be0f0e353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 09:06:35.058018 [ 645 ] {} <Error> void DB::SystemLog<DB::TraceLogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TraceLogElement]: Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4: While checking access for disk archival_disk. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007f4be0fe9609 in ?
10. ? @ 0x00007f4be0f0e353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 09:06:35.890092 [ 660 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 09:06:35.890211 [ 660 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 09:06:35.890233 [ 660 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 09:06:35.890258 [ 660 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 09:06:35.890288 [ 660 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_remove_objects_capability_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 09:06:35.891353 [ 653 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 09:06:35.891430 [ 653 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 09:06:35.891452 [ 653 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 09:06:35.891476 [ 653 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 09:06:35.892962 [ 660 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 09:06:35.893107 [ 660 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 09:06:35.893135 [ 660 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 09:06:35.893156 [ 660 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 09:06:35.893185 [ 660 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 09:06:35.897550 [ 653 ] {} <Error> virtual void DB::IDisk::checkAccessImpl(const String &): Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007f4be0fe9609 in ?
10. ? @ 0x00007f4be0f0e353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 09:06:35.899759 [ 653 ] {} <Error> void DB::SystemLog<DB::MetricLogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4: While checking access for disk archival_disk. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007f4be0fe9609 in ?
10. ? @ 0x00007f4be0f0e353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 09:06:35.908086 [ 660 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 09:06:35.908198 [ 660 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 09:06:35.908223 [ 660 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 09:06:35.908249 [ 660 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 09:06:35.908288 [ 660 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_remove_objects_capability_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 09:06:35.909543 [ 649 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 09:06:35.909827 [ 649 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 09:06:35.909904 [ 649 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 09:06:35.909926 [ 649 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 09:06:35.911396 [ 660 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 09:06:35.911498 [ 660 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 09:06:35.912086 [ 660 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 09:06:35.912119 [ 660 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 09:06:35.912144 [ 660 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 09:06:35.912469 [ 649 ] {} <Error> virtual void DB::IDisk::checkAccessImpl(const String &): Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007f4be0fe9609 in ?
10. ? @ 0x00007f4be0f0e353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 09:06:35.913848 [ 649 ] {} <Error> void DB::SystemLog<DB::BlobStorageLogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::BlobStorageLogElement]: Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4: While checking access for disk archival_disk. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007f4be0fe9609 in ?
10. ? @ 0x00007f4be0f0e353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 09:06:36.084944 [ 11 ] {} <Information> Application: Closed all listening sockets.
2024.03.20 09:06:36.085049 [ 11 ] {} <Information> Application: Closed connections to servers for tables.
2024.03.20 09:06:36.087965 [ 11 ] {} <Information> Application: Waiting for background threads
2024.03.20 09:06:36.156439 [ 11 ] {} <Information> Application: Background threads finished in 68 ms
2024.03.20 09:06:36.160660 [ 11 ] {} <Error> Application: Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4: While checking access for disk archival_disk. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007f4be0fe9609 in ?
10. ? @ 0x00007f4be0f0e353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 09:06:36.161676 [ 11 ] {} <Information> Application: shutting down
2024.03.20 09:06:36.164886 [ 12 ] {} <Information> BaseDaemon: Stop SignalListener thread
2024.03.20 09:06:36.216241 [ 10 ] {} <Information> Application: Child process exited normally with code 243.

Amansethi967 avatar Mar 20 '24 09:03 Amansethi967

ok, the environment variables are passed to the container. Let's make sure they are resolved properly in the config. Change the manifest:

     containers:
        - name: clickhouse
          image: clickhouse/clickhouse-server:23.12.4
          command:
          - /bin/bash
          - -xec
          - env | grep MINIO; grep -C 20 archival_disk -r /var/lib/clickhouse/preprocessed_configs/; clickhouse-server -c /etc/clickhouse-server/config.xml || grep -C 20 archival_disk -r /var/lib/clickhouse/preprocessed_configs/

and share the logs again
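
For reference, if the clickhouse container is still up, a similar check can also be run in-place without changing the manifest — a minimal sketch, where the namespace and pod name are placeholders to substitute:

# Dump the MinIO-related env vars and the substituted disk credentials
# from the running clickhouse container (placeholders: <namespace>, <pod>).
kubectl exec -n <namespace> <pod> -c clickhouse -- bash -c \
  'env | grep MINIO; grep -C 5 archival_disk -r /var/lib/clickhouse/preprocessed_configs/'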

Slach avatar Mar 20 '24 10:03 Slach

@Slach I see in the logs that the config is being populated with the access and secret keys, but the error still points to an S3-related issue only. Sharing the logs again.
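
Note that in the grep output below the access_key_id / secret_access_key values and their closing tags land on separate lines, which may mean the substituted credentials carry a trailing newline — S3 request signing is sensitive to that and can fail with errors like the 400 responses in these logs. A quick way to check, assuming the values come from a Kubernetes Secret (the secret and key names here are placeholders):

# Decode the stored credential and print the raw bytes, including any trailing \n.
kubectl get secret <minio-credentials> -n <namespace> \
  -o jsonpath='{.data.<access-key-field>}' | base64 -d | od -c | tail -n 2

# If a trailing newline shows up, re-encode the value without it, e.g.:
printf '%s' 'minio' | base64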

+ env
+ grep MINIO
MINIO_ACCESS_KEY_TEST=testingenv
MINIO_SECRET_KEY=minio123
MINIO_ACCESS_KEY=minio
+ su -c 'grep -C 20 archival_disk -r /var/lib/clickhouse/preprocessed_configs/' clickhouse
/var/lib/clickhouse/preprocessed_configs/config.xml-        </node>
/var/lib/clickhouse/preprocessed_configs/config.xml-    </zookeeper>
/var/lib/clickhouse/preprocessed_configs/config.xml-    
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-    <!-- Listen wildcard address to allow accepting connections from other containers and host network. -->
/var/lib/clickhouse/preprocessed_configs/config.xml-    <listen_host>::</listen_host>
/var/lib/clickhouse/preprocessed_configs/config.xml-    <listen_host>0.0.0.0</listen_host>
/var/lib/clickhouse/preprocessed_configs/config.xml-    <listen_try>1</listen_try>
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-    
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-    
/var/lib/clickhouse/preprocessed_configs/config.xml-    
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-    
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-    
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-   <storage_configuration>
/var/lib/clickhouse/preprocessed_configs/config.xml-      <disks>
/var/lib/clickhouse/preprocessed_configs/config.xml:         <archival_disk>
/var/lib/clickhouse/preprocessed_configs/config.xml-            <type>s3_plain</type>
/var/lib/clickhouse/preprocessed_configs/config.xml-            <endpoint>http://minio-api-nodeport.vsmaps:9000/archives/</endpoint>
/var/lib/clickhouse/preprocessed_configs/config.xml-            <access_key_id>minio
/var/lib/clickhouse/preprocessed_configs/config.xml-</access_key_id>
/var/lib/clickhouse/preprocessed_configs/config.xml-            <secret_access_key>minio123
/var/lib/clickhouse/preprocessed_configs/config.xml-</secret_access_key>
/var/lib/clickhouse/preprocessed_configs/config.xml:        </archival_disk>
/var/lib/clickhouse/preprocessed_configs/config.xml-      </disks>
/var/lib/clickhouse/preprocessed_configs/config.xml-      <policies>
/var/lib/clickhouse/preprocessed_configs/config.xml-         <archival_volume>
/var/lib/clickhouse/preprocessed_configs/config.xml-            <volumes>
/var/lib/clickhouse/preprocessed_configs/config.xml-              <main>
/var/lib/clickhouse/preprocessed_configs/config.xml:                <disk>archival_disk</disk>
/var/lib/clickhouse/preprocessed_configs/config.xml-              </main>
/var/lib/clickhouse/preprocessed_configs/config.xml-            </volumes>
/var/lib/clickhouse/preprocessed_configs/config.xml-         </archival_volume>
/var/lib/clickhouse/preprocessed_configs/config.xml-      </policies>
/var/lib/clickhouse/preprocessed_configs/config.xml-   </storage_configuration>
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-    
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-    <timezone>UTC</timezone>
/var/lib/clickhouse/preprocessed_configs/config.xml-    
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-     <merge_tree>
/var/lib/clickhouse/preprocessed_configs/config.xml-         <max_suspicious_broken_parts>200</max_suspicious_broken_parts>
/var/lib/clickhouse/preprocessed_configs/config.xml-     </merge_tree>
/var/lib/clickhouse/preprocessed_configs/config.xml-</clickhouse>
+ su -c 'clickhouse-server -C /etc/clickhouse-server/config.xml || grep -C 20 archival_disk -r /var/lib/clickhouse/preprocessed_configs/' clickhouse
Processing configuration file '/etc/clickhouse-server/config.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/chop-generated-hostname-ports.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/chop-generated-macros.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/chop-generated-zookeeper.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-01-listen.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-02-logger.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-03-query_log.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-04-part_log.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-05-trace_log.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/Data_Retention_config.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/chop-generated-remote_servers.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/log_rotation.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/max_suspicious_broken_parts.xml'.
Logging information to /var/log/clickhouse-server/clickhouse-server.log
Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
2024.03.20 11:56:58.920226 [ 13 ] {} <Information> Application: Will watch for the process with pid 14
2024.03.20 11:56:58.920301 [ 14 ] {} <Information> Application: Forked a child process to watch
2024.03.20 11:56:58.920948 [ 14 ] {} <Information> SentryWriter: Sending crash reports is disabled
2024.03.20 11:56:59.010770 [ 14 ] {} <Information> Application: Starting ClickHouse 23.12.4.15 (revision: 54481, git hash: 4233d111d2023fdb43a677fc7e986af25c00edb0, build id: 77F12B9E80533FF63F2348020FBC2AC58B98E258), PID 14
2024.03.20 11:56:59.011084 [ 14 ] {} <Information> Application: starting up
2024.03.20 11:56:59.011124 [ 14 ] {} <Information> Application: OS name: Linux, version: 5.15.0-97-generic, architecture: x86_64
2024.03.20 11:56:59.019447 [ 14 ] {} <Information> Application: Available RAM: 57.47 GiB; physical cores: 12; logical cores: 12.
2024.03.20 11:56:59.020655 [ 14 ] {} <Warning> Context: Delay accounting is not enabled, OSIOWaitMicroseconds will not be gathered. Check /proc/sys/kernel/task_delayacct
2024.03.20 11:56:59.233061 [ 14 ] {} <Information> Application: Integrity check of the executable successfully passed (checksum: 53B8BE58207E028D513CC585088B0E63)
2024.03.20 11:56:59.233196 [ 14 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2024.03.20 11:56:59.245944 [ 14 ] {} <Information> Application: Setting max_server_memory_usage was set to 51.72 GiB (57.47 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2024.03.20 11:56:59.246013 [ 14 ] {} <Information> Application: Setting merges_mutations_memory_usage_soft_limit was set to 28.74 GiB (57.47 GiB available * 0.50 merges_mutations_memory_usage_to_ram_ratio)
2024.03.20 11:56:59.246021 [ 14 ] {} <Information> Application: Merges and mutations memory limit is set to 28.74 GiB
2024.03.20 11:56:59.246822 [ 14 ] {} <Information> BackgroundSchedulePool/BgBufSchPool: Create BackgroundSchedulePool with 16 threads
2024.03.20 11:56:59.248376 [ 14 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 512 threads
2024.03.20 11:56:59.384947 [ 14 ] {} <Information> BackgroundSchedulePool/BgMBSchPool: Create BackgroundSchedulePool with 16 threads
2024.03.20 11:56:59.386970 [ 14 ] {} <Information> BackgroundSchedulePool/BgDistSchPool: Create BackgroundSchedulePool with 16 threads
2024.03.20 11:56:59.388476 [ 14 ] {} <Information> CertificateReloader: One of paths is empty. Cannot apply new configuration for certificates. Fill all paths and try again.
2024.03.20 11:56:59.393205 [ 14 ] {} <Warning> Application: Listen [0.0.0.0]:9009 failed: Poco::Exception. Code: 1000, e.code() = 98, Net Exception: Address already in use: 0.0.0.0:9009 (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 11:56:59.393588 [ 14 ] {} <Information> Application: Listening for replica communication (interserver): http://[::]:9009
2024.03.20 11:56:59.405109 [ 14 ] {} <Information> Context: Initialized background executor for merges and mutations with num_threads=16, num_tasks=32, scheduling_policy=round_robin
2024.03.20 11:56:59.414641 [ 14 ] {} <Information> Context: Initialized background executor for move operations with num_threads=8, num_tasks=8
2024.03.20 11:56:59.424273 [ 14 ] {} <Information> Context: Initialized background executor for fetches with num_threads=16, num_tasks=16
2024.03.20 11:56:59.425316 [ 14 ] {} <Information> Context: Initialized background executor for common operations (e.g. clearing old parts) with num_threads=8, num_tasks=8
2024.03.20 11:56:59.428782 [ 14 ] {} <Information> DNSCacheUpdater: Update period 15 seconds
2024.03.20 11:56:59.428887 [ 14 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2024.03.20 11:56:59.429510 [ 14 ] {} <Information> DatabaseAtomic (system): Metadata processed, database system has 0 tables and 0 dictionaries in total.
2024.03.20 11:56:59.429543 [ 14 ] {} <Information> TablesLoader: Parsed metadata of 0 tables in 1 databases in 0.000139266 sec
2024.03.20 11:56:59.453803 [ 14 ] {} <Information> DatabaseCatalog: Found 0 partially dropped tables. Will load them and retry removal.
2024.03.20 11:56:59.454114 [ 14 ] {} <Information> DatabaseAtomic (default): Metadata processed, database default has 0 tables and 0 dictionaries in total.
2024.03.20 11:56:59.454131 [ 14 ] {} <Information> TablesLoader: Parsed metadata of 0 tables in 1 databases in 4.0836e-05 sec
2024.03.20 11:56:59.454173 [ 14 ] {} <Information> loadMetadata: Start synchronous loading of databases
2024.03.20 11:56:59.454854 [ 14 ] {} <Information> UserDefinedSQLObjectsLoaderFromDisk: Loading user defined objects from /var/lib/clickhouse/user_defined/
2024.03.20 11:56:59.455045 [ 14 ] {} <Information> Application: Tasks stats provider: procfs
2024.03.20 11:56:59.455069 [ 14 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2024.03.20 11:56:59.455686 [ 14 ] {} <Warning> Application: Listen [::]:0 failed: Poco::Exception. Code: 1000, e.code() = 0, SSL Exception: Configuration error: no certificate file has been specified (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 11:56:59.456001 [ 14 ] {} <Warning> Application: Listen [::]:0 failed: Poco::Exception. Code: 1000, e.code() = 0, SSL Exception: Configuration error: no certificate file has been specified (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 11:56:59.563564 [ 14 ] {} <Warning> Application: Listen [0.0.0.0]:8123 failed: Poco::Exception. Code: 1000, e.code() = 98, Net Exception: Address already in use: 0.0.0.0:8123 (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 11:56:59.563742 [ 14 ] {} <Warning> Application: Listen [0.0.0.0]:0 failed: Poco::Exception. Code: 1000, e.code() = 0, SSL Exception: Configuration error: no certificate file has been specified (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 11:56:59.563819 [ 14 ] {} <Warning> Application: Listen [0.0.0.0]:9000 failed: Poco::Exception. Code: 1000, e.code() = 98, Net Exception: Address already in use: 0.0.0.0:9000 (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 11:56:59.563908 [ 14 ] {} <Warning> Application: Listen [0.0.0.0]:0 failed: Poco::Exception. Code: 1000, e.code() = 0, SSL Exception: Configuration error: no certificate file has been specified (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 11:56:59.563979 [ 14 ] {} <Warning> Application: Listen [0.0.0.0]:9004 failed: Poco::Exception. Code: 1000, e.code() = 98, Net Exception: Address already in use: 0.0.0.0:9004 (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 11:56:59.564056 [ 14 ] {} <Warning> Application: Listen [0.0.0.0]:9005 failed: Poco::Exception. Code: 1000, e.code() = 98, Net Exception: Address already in use: 0.0.0.0:9005 (version 23.12.4.15 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2024.03.20 11:56:59.564080 [ 14 ] {} <Information> CertificateReloader: One of paths is empty. Cannot apply new configuration for certificates. Fill all paths and try again.
2024.03.20 11:56:59.584268 [ 663 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 11:56:59.584613 [ 663 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 11:56:59.584676 [ 663 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: minio-api-nodeport.vsmaps:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 11:56:59.584729 [ 663 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 11:56:59.584791 [ 663 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_remove_objects_capability_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 11:56:59.586920 [ 14 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 11:56:59.587013 [ 14 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 11:56:59.587036 [ 14 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 11:56:59.587061 [ 14 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 11:56:59.589802 [ 663 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 11:56:59.589919 [ 663 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 11:56:59.589957 [ 663 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 11:56:59.590014 [ 663 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 11:56:59.590055 [ 663 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 11:56:59.591401 [ 14 ] {} <Error> virtual void DB::IDisk::checkAccessImpl(const String &): Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007fb0bdbbd609 in ?
10. ? @ 0x00007fb0bdae2353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 11:56:59.599001 [ 14 ] {} <Information> Application: Shutting down storages.
2024.03.20 11:56:59.607804 [ 663 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 11:56:59.607991 [ 663 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 11:56:59.608059 [ 663 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 11:56:59.608149 [ 663 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 11:56:59.608246 [ 663 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_remove_objects_capability_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 11:56:59.610659 [ 652 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 11:56:59.610966 [ 652 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 11:56:59.611038 [ 652 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 11:56:59.611075 [ 652 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 11:56:59.613307 [ 663 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 11:56:59.613418 [ 663 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 11:56:59.613438 [ 663 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 11:56:59.613453 [ 663 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 11:56:59.613566 [ 663 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 11:56:59.613867 [ 652 ] {} <Error> virtual void DB::IDisk::checkAccessImpl(const String &): Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007fb0bdbbd609 in ?
10. ? @ 0x00007fb0bdae2353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 11:56:59.615199 [ 652 ] {} <Error> void DB::SystemLog<DB::TraceLogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::TraceLogElement]: Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4: While checking access for disk archival_disk. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007fb0bdbbd609 in ?
10. ? @ 0x00007fb0bdae2353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 11:57:00.468755 [ 663 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 11:57:00.468949 [ 663 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 11:57:00.468985 [ 663 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 11:57:00.469006 [ 663 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 11:57:00.469032 [ 663 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_remove_objects_capability_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 11:57:00.470203 [ 650 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 11:57:00.470303 [ 650 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 11:57:00.470329 [ 650 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 11:57:00.470371 [ 650 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 11:57:00.471320 [ 663 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 11:57:00.471454 [ 663 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 11:57:00.471475 [ 663 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 11:57:00.471489 [ 663 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 11:57:00.471519 [ 663 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 11:57:00.471841 [ 650 ] {} <Error> virtual void DB::IDisk::checkAccessImpl(const String &): Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007fb0bdbbd609 in ?
10. ? @ 0x00007fb0bdae2353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 11:57:00.474308 [ 650 ] {} <Error> void DB::SystemLog<DB::MetricLogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4: While checking access for disk archival_disk. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007fb0bdbbd609 in ?
10. ? @ 0x00007fb0bdae2353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 11:57:00.490921 [ 663 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 11:57:00.498473 [ 663 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 11:57:00.498520 [ 663 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 11:57:00.498562 [ 663 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 11:57:00.498643 [ 663 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_remove_objects_capability_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 11:57:00.500525 [ 653 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 11:57:00.500614 [ 653 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 11:57:00.500637 [ 653 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 11:57:00.500669 [ 653 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 11:57:00.502075 [ 663 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.03.20 11:57:00.502184 [ 663 ] {} <Information> AWSClient: AWSErrorMarshaller: Unable to generate a proper httpResponse from the response stream.   Response code: 400
2024.03.20 11:57:00.502203 [ 663 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: 10.96.1.225:9000
Request ID: 
Exception name: 
Error message: 
2 response headers:
connection : close
content-type : text/plain; charset=utf-8
2024.03.20 11:57:00.502225 [ 663 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.03.20 11:57:00.502253 [ 663 ] {} <Error> WriteBufferFromS3: S3Exception name , Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4
2024.03.20 11:57:00.502727 [ 653 ] {} <Error> virtual void DB::IDisk::checkAccessImpl(const String &): Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007fb0bdbbd609 in ?
10. ? @ 0x00007fb0bdae2353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 11:57:00.504225 [ 653 ] {} <Error> void DB::SystemLog<DB::BlobStorageLogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::BlobStorageLogElement]: Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4: While checking access for disk archival_disk. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007fb0bdbbd609 in ?
10. ? @ 0x00007fb0bdae2353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 11:57:00.649321 [ 14 ] {} <Information> Application: Closed all listening sockets.
2024.03.20 11:57:00.649416 [ 14 ] {} <Information> Application: Closed connections to servers for tables.
2024.03.20 11:57:00.669082 [ 14 ] {} <Information> Application: Waiting for background threads
2024.03.20 11:57:00.769628 [ 14 ] {} <Information> Application: Background threads finished in 100 ms
2024.03.20 11:57:00.779501 [ 14 ] {} <Error> Application: Code: 499. DB::Exception: Message: , bucket archives, key /clickhouse_access_check_754b5b2b-8845-485c-9478-756ec772a65c, object size 4: While checking access for disk archival_disk. (S3_ERROR), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d1c3b in /usr/bin/clickhouse
1. DB::S3Exception::S3Exception<String const&, String const&, String const&, unsigned long&>(Aws::S3::S3Errors, fmt::v8::basic_format_string<char, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<String const&>::type, fmt::v8::type_identity<unsigned long&>::type>, String const&, String const&, String const&, unsigned long&) @ 0x0000000010309fb7 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::makeSinglepartUpload(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031d837 in /usr/bin/clickhouse
3. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001031f796 in /usr/bin/clickhouse
4. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x00000000102f691a in /usr/bin/clickhouse
5. std::packaged_task<void ()>::operator()() @ 0x000000000fb089b4 in /usr/bin/clickhouse
6. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b9944 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bd19c in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7bbf98 in /usr/bin/clickhouse
9. ? @ 0x00007fb0bdbbd609 in ?
10. ? @ 0x00007fb0bdae2353 in ?
 (version 23.12.4.15 (official build))
2024.03.20 11:57:00.779996 [ 14 ] {} <Information> Application: shutting down
2024.03.20 11:57:00.789074 [ 15 ] {} <Information> BaseDaemon: Stop SignalListener thread
2024.03.20 11:57:00.839162 [ 13 ] {} <Information> Application: Child process exited normally with code 243.
/var/lib/clickhouse/preprocessed_configs/config.xml-        </node>
/var/lib/clickhouse/preprocessed_configs/config.xml-    </zookeeper>
/var/lib/clickhouse/preprocessed_configs/config.xml-    
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-    <!-- Listen wildcard address to allow accepting connections from other containers and host network. -->
/var/lib/clickhouse/preprocessed_configs/config.xml-    <listen_host>::</listen_host>
/var/lib/clickhouse/preprocessed_configs/config.xml-    <listen_host>0.0.0.0</listen_host>
/var/lib/clickhouse/preprocessed_configs/config.xml-    <listen_try>1</listen_try>
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-    
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-    
/var/lib/clickhouse/preprocessed_configs/config.xml-    
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-    
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-    
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-   <storage_configuration>
/var/lib/clickhouse/preprocessed_configs/config.xml-      <disks>
/var/lib/clickhouse/preprocessed_configs/config.xml:         <archival_disk>
/var/lib/clickhouse/preprocessed_configs/config.xml-            <type>s3_plain</type>
/var/lib/clickhouse/preprocessed_configs/config.xml-            <endpoint>http://minio-api-nodeport.vsmaps:9000/archives/</endpoint>
/var/lib/clickhouse/preprocessed_configs/config.xml-            <access_key_id>minio
/var/lib/clickhouse/preprocessed_configs/config.xml-</access_key_id>
/var/lib/clickhouse/preprocessed_configs/config.xml-            <secret_access_key>minio123
/var/lib/clickhouse/preprocessed_configs/config.xml-</secret_access_key>
/var/lib/clickhouse/preprocessed_configs/config.xml:        </archival_disk>
/var/lib/clickhouse/preprocessed_configs/config.xml-      </disks>
/var/lib/clickhouse/preprocessed_configs/config.xml-      <policies>
/var/lib/clickhouse/preprocessed_configs/config.xml-         <archival_volume>
/var/lib/clickhouse/preprocessed_configs/config.xml-            <volumes>
/var/lib/clickhouse/preprocessed_configs/config.xml-              <main>
/var/lib/clickhouse/preprocessed_configs/config.xml:                <disk>archival_disk</disk>
/var/lib/clickhouse/preprocessed_configs/config.xml-              </main>
/var/lib/clickhouse/preprocessed_configs/config.xml-            </volumes>
/var/lib/clickhouse/preprocessed_configs/config.xml-         </archival_volume>
/var/lib/clickhouse/preprocessed_configs/config.xml-      </policies>
/var/lib/clickhouse/preprocessed_configs/config.xml-   </storage_configuration>
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-    
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-    <timezone>UTC</timezone>
/var/lib/clickhouse/preprocessed_configs/config.xml-    
/var/lib/clickhouse/preprocessed_configs/config.xml-
/var/lib/clickhouse/preprocessed_configs/config.xml-     <merge_tree>
/var/lib/clickhouse/preprocessed_configs/config.xml-         <max_suspicious_broken_parts>200</max_suspicious_broken_parts>
/var/lib/clickhouse/preprocessed_configs/config.xml-     </merge_tree>
/var/lib/clickhouse/preprocessed_configs/config.xml-</clickhouse>

Amansethi967 avatar Mar 20 '24 12:03 Amansethi967

It looks like your environment variables contain "\r" or "\n" characters: in the preprocessed config above you can see the access_key_id and secret_access_key values wrapping onto the next line before their closing tags.
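
As a quick check (the secret name, namespace, and key below are placeholders, adjust them to whatever actually provides the S3 credentials), you can decode the stored value and inspect its last bytes; a trailing `0a` (`\n`) or `0d` (`\r`) confirms the problem:

```bash
# Decode the credential and dump the tail as characters.
# A trailing \n (0a) or \r (0d) here means the secret was created with a newline.
kubectl get secret s3-credentials -n clickhouse \
  -o jsonpath='{.data.access_key_id}' | base64 -d | od -c | tail -n 2
```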

Change your secret data and remove the trailing newline character.
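
A minimal sketch of producing the value without the newline (credential values taken from the config above, secret name is a placeholder):

```bash
# echo appends "\n" by default; use -n (or printf) when base64-encoding
# values for a Secret manifest:
echo -n 'minio' | base64   # bWluaW8=  -- clean value
echo 'minio' | base64      # bWluaW8K  -- trailing \n ends up inside the credential

# Or let kubectl do the encoding; --from-literal never appends a newline:
kubectl create secret generic s3-credentials -n clickhouse \
  --from-literal=access_key_id=minio \
  --from-literal=secret_access_key=minio123
```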

Slach avatar Mar 20 '24 15:03 Slach