
[Bug] Talos Ceph module not working for V2 Protocol

Open suse-coder opened this issue 5 months ago • 3 comments

Bug Report

I use the external Ceph cluster feature (https://rook.io/docs/rook/latest-release/CRDs/Cluster/external-cluster/external-cluster/) to connect my Talos cluster to an external Ceph cluster.

Everything works fine (rbd and cephfs) with this config:

sudo python3 create-external-cluster-resources.py \
  --namespace rook-ceph \
  --format bash \
  --rbd-data-pool-name        rbd-prod \
  --cephfs-filesystem-name    cephfs \
  --cephfs-metadata-pool-name cephfs-metadata \
  --cephfs-data-pool-name     cephfs-data \
  --subvolume-group           prod \
  --k8s-cluster-name          rook-ceph-prod \
  --restricted-auth-permission true \
  --ceph-conf  /tmp/ceph.conf \
  --keyring    /tmp/keyring \
  --output /tmp/rook-export-env-prod.sh
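
For context, on the Kubernetes side the generated env file is normally consumed along the lines of the Rook external-cluster docs; a minimal sketch (the exact location of the import script depends on the Rook checkout and is an assumption here):

source /tmp/rook-export-env-prod.sh
bash import-external-cluster.sh   # shipped with Rook's example manifests; path varies by release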

With that I can successfully create rbd and cephfs PVCs in Talos.

But when I add --v2-port-enable (so that only encrypted communication is allowed), cephfs breaks, while rbd continues to work with v2.
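
For clarity, the failing invocation is the same command as above with the flag appended (a sketch reusing the same pool names and paths):

sudo python3 create-external-cluster-resources.py \
  --namespace rook-ceph \
  --format bash \
  --rbd-data-pool-name        rbd-prod \
  --cephfs-filesystem-name    cephfs \
  --cephfs-metadata-pool-name cephfs-metadata \
  --cephfs-data-pool-name     cephfs-data \
  --subvolume-group           prod \
  --k8s-cluster-name          rook-ceph-prod \
  --restricted-auth-permission true \
  --v2-port-enable \
  --ceph-conf  /tmp/ceph.conf \
  --keyring    /tmp/keyring \
  --output /tmp/rook-export-env-prod.sh

With that setup the cephfs test pod stays in ContainerCreating; kubectl describe pod shows: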

Name:             cephfs-test-pod-2
Namespace:        default
Priority:         0
Service Account:  default
Node:             talos-mec-lba/192.168.178.79
Start Time:       Sun, 15 Jun 2025 10:46:21 +0000
Labels:           <none>
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  test-container:
    Container ID:  
    Image:         busybox
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
    Args:
      -c
      echo "Test content for CephFS" > /mnt/cephfs/test.txt
      echo "CephFS file created. Sleeping..."
      sleep 3600
      
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt/cephfs from cephfs-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fnqt4 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  cephfs-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  cephfs-test-pvc-2
    ReadOnly:   false
  kube-api-access-fnqt4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age    From                     Message
  ----     ------                  ----   ----                     -------
  Warning  FailedScheduling        5m29s  default-scheduler        0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
  Warning  FailedScheduling        4m43s  default-scheduler        0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
  Normal   Scheduled               4m40s  default-scheduler        Successfully assigned default/cephfs-test-pod-2 to talos-mec-lba
  Normal   SuccessfulAttachVolume  4m40s  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-2e8d289b-c711-4242-a14e-b76df01d498b"
  Warning  FailedMount             3m32s  kubelet                  MountVolume.MountDevice failed for volume "pvc-2e8d289b-c711-4242-a14e-b76df01d498b" : rpc error: code = Internal desc = an error (exit status 32) occurred while running mount args: [-t ceph csi-cephfs-node-rook-ceph-prod-cephfs@91a5d5fc-7d2b-470e-804d-9adb5361f026.cephfs=/volumes/prod/csi-vol-19995dd5-1454-4af7-88fb-1eced843ff31/bb3d5cb2-bb0a-4c6f-8ab3-1c46084f7263 /var/lib/kubelet/plugins/kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com/44ea5b275131d823b3c80c7e081c4e37d95ab5c94c2f8e24dda2d3d546893a9f/globalmount -o mon_addr=192.168.178.80:3300,secretfile=/tmp/csi/keys/keyfile-1725904895,_netdev] stderr: modprobe: FATAL: Module ceph not found in directory /lib/modules/6.12.28-talos
mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized
  Warning  FailedMount  2m30s  kubelet  MountVolume.MountDevice failed for volume "pvc-2e8d289b-c711-4242-a14e-b76df01d498b" : rpc error: code = Internal desc = an error (exit status 32) occurred while running mount args: [-t ceph csi-cephfs-node-rook-ceph-prod-cephfs@91a5d5fc-7d2b-470e-804d-9adb5361f026.cephfs=/volumes/prod/csi-vol-19995dd5-1454-4af7-88fb-1eced843ff31/bb3d5cb2-bb0a-4c6f-8ab3-1c46084f7263 /var/lib/kubelet/plugins/kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com/44ea5b275131d823b3c80c7e081c4e37d95ab5c94c2f8e24dda2d3d546893a9f/globalmount -o mon_addr=192.168.178.80:3300,secretfile=/tmp/csi/keys/keyfile-916872719,_netdev] stderr: modprobe: FATAL: Module ceph not found in directory /lib/modules/6.12.28-talos
mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized
  Warning  FailedMount  89s  kubelet  MountVolume.MountDevice failed for volume "pvc-2e8d289b-c711-4242-a14e-b76df01d498b" : rpc error: code = Internal desc = an error (exit status 32) occurred while running mount args: [-t ceph csi-cephfs-node-rook-ceph-prod-cephfs@91a5d5fc-7d2b-470e-804d-9adb5361f026.cephfs=/volumes/prod/csi-vol-19995dd5-1454-4af7-88fb-1eced843ff31/bb3d5cb2-bb0a-4c6f-8ab3-1c46084f7263 /var/lib/kubelet/plugins/kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com/44ea5b275131d823b3c80c7e081c4e37d95ab5c94c2f8e24dda2d3d546893a9f/globalmount -o mon_addr=192.168.178.80:3300,secretfile=/tmp/csi/keys/keyfile-1898055593,_netdev] stderr: modprobe: FATAL: Module ceph not found in directory /lib/modules/6.12.28-talos
mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized
  Warning  FailedMount  25s  kubelet  MountVolume.MountDevice failed for volume "pvc-2e8d289b-c711-4242-a14e-b76df01d498b" : rpc error: code = Internal desc = an error (exit status 32) occurred while running mount args: [-t ceph csi-cephfs-node-rook-ceph-prod-cephfs@91a5d5fc-7d2b-470e-804d-9adb5361f026.cephfs=/volumes/prod/csi-vol-19995dd5-1454-4af7-88fb-1eced843ff31/bb3d5cb2-bb0a-4c6f-8ab3-1c46084f7263 /var/lib/kubelet/plugins/kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com/44ea5b275131d823b3c80c7e081c4e37d95ab5c94c2f8e24dda2d3d546893a9f/globalmount -o mon_addr=192.168.178.80:3300,secretfile=/tmp/csi/keys/keyfile-2722618729,_netdev] stderr: modprobe: FATAL: Module ceph not found in directory /lib/modules/6.12.28-talos
mount error: no mds (Metadata Server) is up. The cluster might be laggy, or you may not be authorized

So provisioning and attaching the cephfs PVC to the pod was successful, but mounting was not: FATAL: Module ceph not found in directory ...
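
To double-check whether the ceph filesystem driver is actually present on the node, something like the following could be run (a suggested check, not part of the original report; talosctl read prints a file from the node):

talosctl -n 192.168.178.79 read /proc/filesystems | grep ceph
talosctl -n 192.168.178.79 read /proc/modules | grep ceph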

The mds server is up:

sudo microceph.ceph mds metadata t-Virtual-Machine 
{
    "addr": "[v2:192.168.178.80:6800/1780088979,v1:192.168.178.80:6801/1780088979]",
    "arch": "x86_64",
...

sudo microceph.ceph mds stat
cephfs:1 {0=t-Virtual-Machine=up:active}
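
It may also be worth confirming that the monitor advertises both v2 and v1 addresses (an additional check, not from the original report):

sudo microceph.ceph mon dump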

and the connection works:
nc -zv 192.168.178.80 6800
Connection to 192.168.178.80 6800 port [tcp/*] succeeded!
t@t-Virtual-Machine:~$ nc -zv 192.168.178.80 6801
Connection to 192.168.178.80 6801 port [tcp/*] succeeded!
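
For completeness, the msgr2 monitor port used in the failing mount options (3300) could be probed the same way (a suggested check, not output from the original report):

nc -zv 192.168.178.80 3300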

rbd, no encryption: provisioning and mounting works in talos
cephfs, no encryption: provisioning and mounting works in talos
rbd, encryption V2: provisioning and mounting works in talos
cephfs, encryption V2: provisioning and mounting does not work in talos

Why does only cephfs with V2 encryption fail to mount in Talos? The ceph kernel module is loaded (see the dmesg output below). Is a newer one needed?

Description

Logs

Generated output from the Python script for the Ceph cluster:

export ARGS="[Configurations]
ceph-conf = /tmp/ceph.conf
keyring = /tmp/keyring
k8s-cluster-name = rook-ceph-prod
namespace = rook-ceph
rgw-pool-prefix = default
restricted-auth-permission = true
v2-port-enable = True
format = bash
output = /tmp/rook-export-env-prod.sh
cephfs-filesystem-name = cephfs
cephfs-metadata-pool-name = cephfs-metadata
cephfs-data-pool-name = cephfs-data
rbd-data-pool-name = rbd-prod
subvolume-group = prod
"
export NAMESPACE=rook-ceph
export ROOK_EXTERNAL_FSID=91a5d5fc-7d2b-470e-804d-9adb5361f026
export ROOK_EXTERNAL_USERNAME=client.healthchecker
export ROOK_EXTERNAL_CEPH_MON_DATA=t-Virtual-Machine=192.168.178.80:3300
export ROOK_EXTERNAL_USER_SECRET=AQDUoU5oGncLDxAAZn8EJMc81Fzl2Ubj60OjbQ==
export ROOK_EXTERNAL_DASHBOARD_LINK=http://192.168.178.80:8080/
export CSI_RBD_NODE_SECRET=AQDUoU5ot8dYDxAAnZ27jvkAZhjicMzTu9Cmeg==
export CSI_RBD_NODE_SECRET_NAME=csi-rbd-node-rook-ceph-prod-rbd-prod
export CSI_RBD_PROVISIONER_SECRET=AQDUoU5oycSTDxAAF5mDoaUoXWTERJD58s8llQ==
export CSI_RBD_PROVISIONER_SECRET_NAME=csi-rbd-provisioner-rook-ceph-prod-rbd-prod
export CEPHFS_POOL_NAME=cephfs-data
export CEPHFS_METADATA_POOL_NAME=cephfs-metadata
export CEPHFS_FS_NAME=cephfs
export RESTRICTED_AUTH_PERMISSION=true
export SUBVOLUME_GROUP=prod
export CSI_CEPHFS_NODE_SECRET=AQDUoU5otFPTDxAAsKrY5FYz+vroxzQLDJ618A==
export CSI_CEPHFS_PROVISIONER_SECRET=AQDUoU5oL3waEBAAXxiV67vWPr8OACiBsIMNmQ==
export CSI_CEPHFS_NODE_SECRET_NAME=csi-cephfs-node-rook-ceph-prod-cephfs
export CSI_CEPHFS_PROVISIONER_SECRET_NAME=csi-cephfs-provisioner-rook-ceph-prod-cephfs
export MONITORING_ENDPOINT=192.168.178.80
export MONITORING_ENDPOINT_PORT=9283
export RBD_POOL_NAME=rbd-prod
export RGW_POOL_PREFIX=default
Kernel log on the Talos node:

talosctl -n 192.168.178.79 dmesg | grep ceph
192.168.178.79: kern:    info: [2025-06-15T12:18:34.170859811Z]: ceph: loaded (mds proto 32)
192.168.178.79: kern:  notice: [2025-06-15T12:18:34.232279811Z]: Key type ceph registered
192.168.178.79: kern:    info: [2025-06-15T12:18:34.232442811Z]: libceph: loaded (mon/osd proto 15/24)
192.168.178.79: kern: warning: [2025-06-15T12:23:47.074119811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:23:47.325206811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:23:47.833692811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:23:48.857068811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:23:50.777636811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:23:50.937209811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:23:51.189086811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:23:51.705268811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:23:52.729209811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:23:54.010338811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:23:54.261401811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:23:54.778412811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:23:55.801908811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:23:56.825259811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:23:57.076962811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:23:57.592998811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern:    info: [2025-06-15T12:23:57.603788811Z]: libceph: mon0 (1)192.168.178.80:6789 session established
192.168.178.79: kern:    info: [2025-06-15T12:23:57.604638811Z]: libceph: client106651 fsid 91a5d5fc-7d2b-470e-804d-9adb5361f026
192.168.178.79: kern: warning: [2025-06-15T12:23:58.617147811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:23:59.897196811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:00.148973811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:00.665144811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:01.690414811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:02.973120811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:03.226784811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:03.737753811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:04.761208811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:06.042450811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:06.293116811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:06.809201811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:07.833164811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:09.114716811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:09.364850811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:09.881546811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:10.904883811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:12.793151811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:12.953219811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:13.205219811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:13.725028811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:14.744857811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:16.027489811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:16.277275811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:16.793112811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:17.816837811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:18.841384811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:19.093437811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:19.609239811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:20.633082811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:21.916871811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:22.169509811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:22.682587811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:23.705131811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:24.985876811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:25.236703811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:25.753064811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:26.777014811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:27.801803811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:28.052838811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:28.569294811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:29.592803811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:30.872895811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:31.125052811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:31.640655811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:32.665988811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:33.945040811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:34.197598811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:34.713108811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:35.737542811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:37.021276811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:37.273099811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:37.784784811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:38.809181811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:39.833240811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:40.084715811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:40.600892811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:41.624684811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:42.909027811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:43.161105811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:43.673272811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:44.699270811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:45.976706811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:46.228821811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern: warning: [2025-06-15T12:24:46.744705811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)
192.168.178.79: kern:    info: [2025-06-15T12:24:47.767958811Z]: ceph: No mds server is up or the cluster is laggy

3300 is the V2 port, but what does this warning mean? 192.168.178.79: kern: warning: [2025-06-15T12:24:09.364850811Z]: libceph: mon0 (1)192.168.178.80:3300 socket closed (con state V1_BANNER)

Environment

talosctl version
Client:
    Tag:        v1.10.2
    SHA:        1cf5914b
    Built:
    Go version: go1.24.3
    OS/Arch:    linux/amd64
Server:
    NODE:       192.168.178.77
    Tag:        v1.10.3
    SHA:        dde2cebc
    Built:
    Go version: go1.24.3
    OS/Arch:    linux/amd64
    Enabled:    RBAC

suse-coder, Jun 15 '25 10:06