/usr/bin/ceph: stderr Error EIO: Module 'cephadm' has experienced an error and cannot handle commands: ContainerInspectInfo(image_id='2654cc30c2220de62159577f557332ebefd82acedaca04c5bdc04866e056d071', ceph_version='ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)', repo_digests=[''])
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 1f001bd8-37e8-11ee-841f-bad9635aa8a4
Verifying IP 192.168.2.129 port 3300 ...
Verifying IP 192.168.2.129 port 6789 ...
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Ceph version: ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Verifying port 8765 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr not available, waiting (4/15)...
mgr not available, waiting (5/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host localhost.localdomain...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
URL: https://localhost.localdomain:8443/
User: admin
Password: aidpjhmh2o
Enabling client.admin keyring and conf on hosts with "admin" label
Non-zero exit code 5 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=ceph/daemon-base:main-reef-centos-stream8-x86_64 -e NODE_NAME=localhost.localdomain -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/1f001bd8-37e8-11ee-841f-bad9635aa8a4:/var/log/ceph:z -v /tmp/ceph-tmpqtaiu7g3:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp47upkre8:/etc/ceph/ceph.conf:z ceph/daemon-base:main-reef-centos-stream8-x86_64 orch client-keyring set client.admin label:_admin
/usr/bin/ceph: stderr Error EIO: Module 'cephadm' has experienced an error and cannot handle commands: ContainerInspectInfo(image_id='2654cc30c2220de62159577f557332ebefd82acedaca04c5bdc04866e056d071', ceph_version='ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)', repo_digests=[''])
Unable to set up "admin" label; assuming older version of Ceph
Saving cluster configuration to /var/lib/ceph/1f001bd8-37e8-11ee-841f-bad9635aa8a4/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
sudo cephadm shell --fsid 1f001bd8-37e8-11ee-841f-bad9635aa8a4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Or, if you are only running a single cluster on this host:
sudo cephadm shell
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/en/latest/mgr/telemetry/
Bootstrap complete.
Whenever I modify or rebuild the container image, bootstrap reports this error again for the new image ID. This is with Reef (18.2.0).
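The repo_digests=[''] in the error above suggests the rebuilt image carries no registry digest, which the cephadm module in 18.2.0 does not seem to tolerate. This can be checked directly against the image name from the log; the field is empty for a locally built or retagged image:

    docker image inspect --format '{{json .RepoDigests}}' ceph/daemon-base:main-reef-centos-stream8-x86_64

An image pulled from a registry prints something like ["ceph/daemon-base@sha256:..."] here, while a locally built one prints [].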
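A possible workaround, assuming a reachable registry (localhost:5000 below is a hypothetical address), is to push the rebuilt image so it gains a repo digest and then bootstrap from that reference. Untested here, but the flags are standard docker/cephadm ones, and the --mon-ip value is the one from the log above:

    docker tag ceph/daemon-base:main-reef-centos-stream8-x86_64 localhost:5000/ceph/daemon-base:main-reef-centos-stream8-x86_64
    docker push localhost:5000/ceph/daemon-base:main-reef-centos-stream8-x86_64
    cephadm bootstrap --image localhost:5000/ceph/daemon-base:main-reef-centos-stream8-x86_64 --mon-ip 192.168.2.129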