ceph-nvmeof
Failure adding namespace 'Operation not permitted'
I am going through my first ceph-nvmeof test and am running into an issue where I cannot create namespaces. Based on the errors, it seems the RBD image I created cannot be seen by the nvmeof container, but I'm not sure why. Any guidance would be much appreciated!
I created an RBD image manually using:
rbd create --size 2048 rbd/disk01
I can view this RBD to verify it exists:
[root@nor1devlcph01 ceph-nvmeof]# rbd info rbd/disk01
rbd image 'disk01':
size 2 GiB in 512 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 2a61b0187d9333
block_name_prefix: rbd_data.2a61b0187d9333
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Fri Feb 23 11:00:10 2024
access_timestamp: Fri Feb 23 11:00:10 2024
modify_timestamp: Fri Feb 23 11:00:10 2024
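(Aside: a generic librbd-level check from the host, assuming the client.admin keyring is in place, is rbd status, which only succeeds if librbd can open the image with the local conf and keyring:)
rbd --id admin status rbd/disk01 # lists watchers; confirms the image opens via librbd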
When I try to create a namespace using: cephnvmf namespace add --subsystem nqn.2016-06.io.spdk:nor1devlcph01 --rbd-pool rbd --rbd-image disk01
I get this error on my CLI:
[root@nor1devlcph01 ceph-nvmeof]# cephnvmf namespace add --subsystem nqn.2016-06.io.spdk:nor1devlcph01 --rbd-pool rbd --rbd-image disk01
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.6.1
['podman', 'network', 'exists', 'ceph-nvmeof_default']
podman run --name=ceph-nvmeof_nvmeof-cli_tmp44046 --rm -i --label io.podman.compose.config-hash=b67723df334cc193e5d32c14d7c9efb9b5625df1b9fb8705f72317519ea1cd9e --label io.podman.compose.project=ceph-nvmeof --label io.podman.compose.version=1.0.6 --label [email protected] --label com.docker.compose.project=ceph-nvmeof --label com.docker.compose.project.working_dir=/root/git/ceph-nvmeof --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=nvmeof-cli --net ceph-nvmeof_default --network-alias nvmeof-cli --tty quay.io/ceph/nvmeof-cli:1.0.0 --server-address 192.168.13.3 --server-port 5500 namespace add --subsystem nqn.2016-06.io.spdk:nor1devlcph01 --rbd-pool rbd --rbd-image disk01
Failure adding namespace to nqn.2016-06.io.spdk:nor1devlcph01: Failure creating bdev bdev_c7a8957c-4d4c-4ef2-99ea-5392c76c8870: Operation not permitted
exit code: 1
And I get this in the nvmeof container:
INFO:nvmeof:Received request to add a namespace to nqn.2016-06.io.spdk:nor1devlcph01, context: <grpc._server._Context object at 0x7f6cb06978b0>
INFO:nvmeof:Received request to create bdev bdev_60ae41b4-85b3-43b6-9991-df30ad9366f5 from rbd/disk01 (size 0 MiB) with block size 512, will not create image if doesn't exist
[2024-02-23 16:57:03.185249] bdev_rbd.c: 299:bdev_rbd_init_context: *ERROR*: Failed to open specified rbd device
[2024-02-23 16:57:03.185665] bdev_rbd.c: 335:bdev_rbd_init: *ERROR*: Cannot init rbd context for rbd=0x1f1ad20
[2024-02-23 16:57:03.185708] bdev_rbd.c:1170:bdev_rbd_create: *ERROR*: Failed to init rbd device
ERROR:nvmeof:bdev_rbd_create bdev_60ae41b4-85b3-43b6-9991-df30ad9366f5 failed with:
request:
{
"pool_name": "rbd",
"rbd_name": "disk01",
"block_size": 512,
"name": "bdev_60ae41b4-85b3-43b6-9991-df30ad9366f5",
"cluster_name": "cluster_context_0",
"uuid": "60ae41b4-85b3-43b6-9991-df30ad9366f5",
"method": "bdev_rbd_create",
"req_id": 18
}
Got JSON-RPC error response
response:
{
"code": -1,
"message": "Operation not permitted"
}
ERROR:nvmeof:Failure adding namespace to nqn.2016-06.io.spdk:nor1devlcph01: Failure creating bdev bdev_60ae41b4-85b3-43b6-9991-df30ad9366f5: Operation not permitted
[2024-02-23 16:57:03.189787] bdev.c:7158:spdk_bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: bdev_60ae41b4-85b3-43b6-9991-df30ad9366f5
[2024-02-23 16:57:03.189825] bdev_rpc.c: 866:rpc_bdev_get_bdevs: *ERROR*: bdev 'bdev_60ae41b4-85b3-43b6-9991-df30ad9366f5' does not exist
ERROR:nvmeof:Got exception while getting bdev bdev_60ae41b4-85b3-43b6-9991-df30ad9366f5 info
Traceback (most recent call last):
File "/src/control/grpc.py", line 892, in get_bdev_info
bdevs = rpc_bdev.bdev_get_bdevs(self.spdk_rpc_client, name=bdev_name)
File "/usr/lib/python3.9/site-packages/spdk/rpc/bdev.py", line 1553, in bdev_get_bdevs
return client.call('bdev_get_bdevs', params)
File "/usr/lib/python3.9/site-packages/spdk/rpc/client.py", line 203, in call
raise JSONRPCException(msg)
spdk.rpc.client.JSONRPCException: request:
{
"name": "bdev_60ae41b4-85b3-43b6-9991-df30ad9366f5",
"method": "bdev_get_bdevs",
"req_id": 19
}
Got JSON-RPC error response
response:
{
"code": -19,
"message": "No such device"
}
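(Aside: bdev_rbd_create failing with 'Operation not permitted' appears to be an EPERM surfaced by librbd inside the gateway, so a generic first check, assuming the gateway uses client.admin and the default podman-compose container name ceph-nvmeof_nvmeof_1, is that the credentials and keyring the gateway sees actually cover the pool:)
ceph auth get client.admin # mon/osd caps should allow access to the rbd pool
podman exec ceph-nvmeof_nvmeof_1 ls -l /etc/ceph # the gateway container needs a ceph.conf and keyring here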
I think you are missing a step that requires running: rbd pool init NVME-OF_POOL_NAME
We might need to add it to the README. Let me know.
I had hoped it was that simple, but maybe not. I'm not sure if there's a way to verify whether a pool was already initialized, so I just re-ran the command and got no output; I'm assuming it either was already initialized or the init succeeded. Still having the same problem creating the namespace.
[root@nor1devlcph01 ceph-nvmeof]# rbd pool init rbd
[root@nor1devlcph01 ceph-nvmeof]# cephnvmf subsystem list
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.6.1
['podman', 'network', 'exists', 'ceph-nvmeof_default']
podman run --name=ceph-nvmeof_nvmeof-cli_tmp8662 --rm -i --label io.podman.compose.config-hash=b67723df334cc193e5d32c14d7c9efb9b5625df1b9fb8705f72317519ea1cd9e --label io.podman.compose.project=ceph-nvmeof --label io.podman.compose.version=1.0.6 --label [email protected] --label com.docker.compose.project=ceph-nvmeof --label com.docker.compose.project.working_dir=/root/git/ceph-nvmeof --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=nvmeof-cli --net ceph-nvmeof_default --network-alias nvmeof-cli --tty quay.io/ceph/nvmeof-cli:1.0.0 --server-address 192.168.13.4 --server-port 5500 subsystem list
Subsystems:
╒═══════════╤═══════════════════════════════════╤════════════╤════════════════════╤══════════════════════╤══════════════════╤═════════════╕
│ Subtype │ NQN │ HA State │ Serial Number │ Model Number │ Controller IDs │ Namespace │
│ │ │ │ │ │ │ Count │
╞═══════════╪═══════════════════════════════════╪════════════╪════════════════════╪══════════════════════╪══════════════════╪═════════════╡
│ NVMe │ nqn.2016-06.io.spdk:nor1devlcph01 │ disabled │ SPDK49375303171247 │ SPDK bdev Controller │ 1-65519 │ 0 │
╘═══════════╧═══════════════════════════════════╧════════════╧════════════════════╧══════════════════════╧══════════════════╧═════════════╛
exit code: 0
And then trying to create the namespace gives:
[root@nor1devlcph01 ceph-nvmeof]# cephnvmf namespace add --subsystem nqn.2016-06.io.spdk:nor1devlcph01 --rbd-pool rbd --rbd-image disk01
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.6.1
['podman', 'network', 'exists', 'ceph-nvmeof_default']
podman run --name=ceph-nvmeof_nvmeof-cli_tmp62368 --rm -i --label io.podman.compose.config-hash=b67723df334cc193e5d32c14d7c9efb9b5625df1b9fb8705f72317519ea1cd9e --label io.podman.compose.project=ceph-nvmeof --label io.podman.compose.version=1.0.6 --label [email protected] --label com.docker.compose.project=ceph-nvmeof --label com.docker.compose.project.working_dir=/root/git/ceph-nvmeof --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=nvmeof-cli --net ceph-nvmeof_default --network-alias nvmeof-cli --tty quay.io/ceph/nvmeof-cli:1.0.0 --server-address 192.168.13.4 --server-port 5500 namespace add --subsystem nqn.2016-06.io.spdk:nor1devlcph01 --rbd-pool rbd --rbd-image disk01
Failure adding namespace to nqn.2016-06.io.spdk:nor1devlcph01: Failure creating bdev bdev_3a1df8b8-3d8c-4ab8-9457-cf9791338f21: Operation not permitted
exit code: 1
Output from the nvmeof container when I try to create the namespace:
INFO:nvmeof:Received request to add a namespace to nqn.2016-06.io.spdk:nor1devlcph01, context: <grpc._server._Context object at 0x7f9e4066b070>
INFO:nvmeof:Received request to create bdev bdev_3a1df8b8-3d8c-4ab8-9457-cf9791338f21 from rbd/disk01 (size 0 MiB) with block size 512, will not create image if doesn't exist
[2024-02-25 14:44:04.811325] bdev_rbd.c: 299:bdev_rbd_init_context: *ERROR*: Failed to open specified rbd device
[2024-02-25 14:44:04.811982] bdev_rbd.c: 335:bdev_rbd_init: *ERROR*: Cannot init rbd context for rbd=0x2750980
[2024-02-25 14:44:04.812032] bdev_rbd.c:1170:bdev_rbd_create: *ERROR*: Failed to init rbd device
ERROR:nvmeof:bdev_rbd_create bdev_3a1df8b8-3d8c-4ab8-9457-cf9791338f21 failed with:
request:
{
"pool_name": "rbd",
"rbd_name": "disk01",
"block_size": 512,
"name": "bdev_3a1df8b8-3d8c-4ab8-9457-cf9791338f21",
"cluster_name": "cluster_context_0",
"uuid": "3a1df8b8-3d8c-4ab8-9457-cf9791338f21",
"method": "bdev_rbd_create",
"req_id": 10
}
Got JSON-RPC error response
response:
{
"code": -1,
"message": "Operation not permitted"
}
ERROR:nvmeof:Failure adding namespace to nqn.2016-06.io.spdk:nor1devlcph01: Failure creating bdev bdev_3a1df8b8-3d8c-4ab8-9457-cf9791338f21: Operation not permitted
[2024-02-25 14:44:04.816092] bdev.c:7158:spdk_bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: bdev_3a1df8b8-3d8c-4ab8-9457-cf9791338f21
[2024-02-25 14:44:04.816126] bdev_rpc.c: 866:rpc_bdev_get_bdevs: *ERROR*: bdev 'bdev_3a1df8b8-3d8c-4ab8-9457-cf9791338f21' does not exist
ERROR:nvmeof:Got exception while getting bdev bdev_3a1df8b8-3d8c-4ab8-9457-cf9791338f21 info
Traceback (most recent call last):
File "/src/control/grpc.py", line 892, in get_bdev_info
bdevs = rpc_bdev.bdev_get_bdevs(self.spdk_rpc_client, name=bdev_name)
File "/usr/lib/python3.9/site-packages/spdk/rpc/bdev.py", line 1553, in bdev_get_bdevs
return client.call('bdev_get_bdevs', params)
File "/usr/lib/python3.9/site-packages/spdk/rpc/client.py", line 203, in call
raise JSONRPCException(msg)
spdk.rpc.client.JSONRPCException: request:
{
"name": "bdev_3a1df8b8-3d8c-4ab8-9457-cf9791338f21",
"method": "bdev_get_bdevs",
"req_id": 11
}
Got JSON-RPC error response
response:
{
"code": -19,
"message": "No such device"
}
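(Aside: rbd pool init prints nothing on success, so one way to confirm a pool has been initialized for RBD, a generic Ceph check rather than something from this thread, is to look at the pool's application tags:)
ceph osd pool application get rbd # an initialized RBD pool lists the 'rbd' application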
It should be simple. We will find the issue. @gbregman can you check? Maybe we need to know the SHA of the CLI and GW being used here?
@gsperry2011 can you send us the git commit hash values for the NVMeOF and SPDK? You can find them in the nvmeof log. Look for "NVMeoF gateway Git commit" and "SPDK Git commit". Thanks.
@gsperry2011 also, can you send the contents of the configuration file? You can see it in the nvmeof log. Look for the text between:
====================================== Configuration file content ======================================
and:
========================================================================================================
- NVMeoF:
[2024-02-25 14:37:18] INFO grpc.py:75: NVMeoF gateway Git repository: https://github.com/ceph/ceph-nvmeof
[2024-02-25 14:37:18] INFO grpc.py:78: NVMeoF gateway Git branch: tags/1.0.0
[2024-02-25 14:37:18] INFO grpc.py:81: NVMeoF gateway Git commit: d08860d3a1db890b2c3ec9c8da631f1ded3b61b6
- SPDK:
[2024-02-25 14:37:18] INFO grpc.py:87: SPDK Git repository: https://github.com/ceph/spdk.git
[2024-02-25 14:37:18] INFO grpc.py:90: SPDK Git branch: undefined
[2024-02-25 14:37:18] INFO grpc.py:93: SPDK Git commit: 668268f74ea147f3343b9f8136df3e6fcc61f4cf
- Here is the config file from the logs. I don't think I made any changes to it, so I believe it's straight from the git repo at the moment:
[2024-02-25 14:37:17] INFO config.py:57: Using configuration file /src/ceph-nvmeof.conf
[2024-02-25 14:37:17] INFO config.py:59: ====================================== Configuration file content ======================================
[2024-02-25 14:37:17] INFO config.py:63: #
[2024-02-25 14:37:17] INFO config.py:63: # Copyright (c) 2021 International Business Machines
[2024-02-25 14:37:17] INFO config.py:63: # All rights reserved.
[2024-02-25 14:37:17] INFO config.py:63: #
[2024-02-25 14:37:17] INFO config.py:63: # SPDX-License-Identifier: LGPL-3.0-or-later
[2024-02-25 14:37:17] INFO config.py:63: #
[2024-02-25 14:37:17] INFO config.py:63: # Authors: [email protected], [email protected]
[2024-02-25 14:37:17] INFO config.py:63: #
[2024-02-25 14:37:17] INFO config.py:63:
[2024-02-25 14:37:17] INFO config.py:63: [gateway]
[2024-02-25 14:37:17] INFO config.py:63: name =
[2024-02-25 14:37:17] INFO config.py:63: group =
[2024-02-25 14:37:17] INFO config.py:63: addr = 0.0.0.0
[2024-02-25 14:37:17] INFO config.py:63: port = 5500
[2024-02-25 14:37:17] INFO config.py:63: enable_auth = False
[2024-02-25 14:37:17] INFO config.py:63: state_update_notify = True
[2024-02-25 14:37:17] INFO config.py:63: state_update_interval_sec = 5
[2024-02-25 14:37:17] INFO config.py:63: enable_spdk_discovery_controller = False
[2024-02-25 14:37:17] INFO config.py:63: #omap_file_lock_duration = 60
[2024-02-25 14:37:17] INFO config.py:63: #omap_file_lock_retries = 15
[2024-02-25 14:37:17] INFO config.py:63: #omap_file_lock_retry_sleep_interval = 5
[2024-02-25 14:37:17] INFO config.py:63: #omap_file_update_reloads = 10
[2024-02-25 14:37:17] INFO config.py:63: log_level=debug
[2024-02-25 14:37:17] INFO config.py:63: bdevs_per_cluster = 32
[2024-02-25 14:37:17] INFO config.py:63: #log_files_enabled = True
[2024-02-25 14:37:17] INFO config.py:63: #log_files_rotation_enabled = True
[2024-02-25 14:37:17] INFO config.py:63: #verbose_log_messages = True
[2024-02-25 14:37:17] INFO config.py:63: #max_log_file_size_in_mb=10
[2024-02-25 14:37:17] INFO config.py:63: #max_log_files_count=20
[2024-02-25 14:37:17] INFO config.py:63: #
[2024-02-25 14:37:17] INFO config.py:63: # Notice that if you change the log directory the log files will only be visible inside the container
[2024-02-25 14:37:17] INFO config.py:63: #
[2024-02-25 14:37:17] INFO config.py:63: #log_directory = /var/log/ceph/
[2024-02-25 14:37:17] INFO config.py:63: #enable_prometheus_exporter = True
[2024-02-25 14:37:17] INFO config.py:63: #prometheus_exporter_ssl = True
[2024-02-25 14:37:17] INFO config.py:63: #prometheus_port = 10008
[2024-02-25 14:37:17] INFO config.py:63: #prometheus_bdev_pools = rbd
[2024-02-25 14:37:17] INFO config.py:63: #prometheus_stats_interval = 10
[2024-02-25 14:37:17] INFO config.py:63: #verify_nqns = True
[2024-02-25 14:37:17] INFO config.py:63:
[2024-02-25 14:37:17] INFO config.py:63: [discovery]
[2024-02-25 14:37:17] INFO config.py:63: addr = 0.0.0.0
[2024-02-25 14:37:17] INFO config.py:63: port = 8009
[2024-02-25 14:37:17] INFO config.py:63:
[2024-02-25 14:37:17] INFO config.py:63: [ceph]
[2024-02-25 14:37:17] INFO config.py:63: pool = rbd
[2024-02-25 14:37:17] INFO config.py:63: config_file = /etc/ceph/ceph.conf
[2024-02-25 14:37:17] INFO config.py:63:
[2024-02-25 14:37:17] INFO config.py:63: [mtls]
[2024-02-25 14:37:17] INFO config.py:63: server_key = ./server.key
[2024-02-25 14:37:17] INFO config.py:63: client_key = ./client.key
[2024-02-25 14:37:17] INFO config.py:63: server_cert = ./server.crt
[2024-02-25 14:37:17] INFO config.py:63: client_cert = ./client.crt
[2024-02-25 14:37:17] INFO config.py:63:
[2024-02-25 14:37:17] INFO config.py:63: [spdk]
[2024-02-25 14:37:17] INFO config.py:63: tgt_path = /usr/local/bin/nvmf_tgt
[2024-02-25 14:37:17] INFO config.py:63: #rpc_socket_dir = /var/tmp/
[2024-02-25 14:37:17] INFO config.py:63: #rpc_socket_name = spdk.sock
[2024-02-25 14:37:17] INFO config.py:63: #tgt_cmd_extra_args = --env-context="--no-huge -m1024" --iova-mode=va
[2024-02-25 14:37:17] INFO config.py:63: timeout = 60.0
[2024-02-25 14:37:17] INFO config.py:63: log_level = WARN
[2024-02-25 14:37:17] INFO config.py:63:
[2024-02-25 14:37:17] INFO config.py:63: # Example value: -m 0x3 -L all
[2024-02-25 14:37:17] INFO config.py:63: # tgt_cmd_extra_args =
[2024-02-25 14:37:17] INFO config.py:63:
[2024-02-25 14:37:17] INFO config.py:63: # transports = tcp
[2024-02-25 14:37:17] INFO config.py:63:
[2024-02-25 14:37:17] INFO config.py:63: # Example value: {"max_queue_depth" : 16, "max_io_size" : 4194304, "io_unit_size" : 1048576, "zcopy" : false}
[2024-02-25 14:37:17] INFO config.py:63: transport_tcp_options = {"in_capsule_data_size" : 8192, "max_io_qpairs_per_ctrlr" : 7}
[2024-02-25 14:37:17] INFO config.py:64: ========================================================================================================
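(Aside: the [ceph] section above also determines where the gateway persists its own state: it keeps an OMAP state object in the configured pool on whatever cluster the container's /etc/ceph/ceph.conf points at. A hedged check against the external cluster, with the object name pattern being an assumption that may vary by version:)
rados -p rbd ls | grep -i nvmeof # if the gateway were using this cluster, its state object should show up here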
Here is my ceph version:
[root@nor1devlcph01 greg.perry]# ceph -v
ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)
@gsperry2011 can you describe how you deployed the GW? Did you use cephadm? Or how?
Sure thing. For now I am very early in testing and am following the steps as listed in the README. FWIW, I haven't seen a cephadm deployment guide, so I assume that is up to me to figure out and is what I hope to use for my production deployment. I'll admit I'm still fairly new to Ceph, so perhaps this comes easily to someone more experienced. My current Ceph deployment was done using cephadm, which is how I administer my cluster.
I ran into a couple of issues with the steps in the README, all of them related to the Makefile attempting to call podman compose when my binary is podman-compose, so I was unable to use the make commands and ran them manually; perhaps that is where the issue is.
Here are my steps:
- Make sure SELinux is in permissive mode
- Add the Docker repo:
sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
- We need docker-engine and docker-compose. I already have podman running, so I just needed podman-compose:
yum install -y podman-compose
yum install -y jq # not listed as a requirement, but I found running 'make pull' below tries to use it
- Clone the code:
git clone https://github.com/ceph/ceph-nvmeof.git
cd ceph-nvmeof
git submodule update --init --recursive
- Enable huge pages:
make setup
- Download the container images:
make pull
# The above command had issues trying to run 'podman compose' when we need 'podman-compose', so I ran this manually to download the images:
podman-compose pull ceph spdk bdevperf nvmeof nvmeof-devel nvmeof-cli discovery
- Start the containers. NOTE: this should spawn two containers; one should have 192.168.13.2 and the nvmeof container will be 192.168.13.3:
make up
# This also failed due to 'podman compose' not being available, but we do have 'podman-compose', so I ran these manually:
podman-compose up ceph --abort-on-container-exit --exit-code-from ceph --remove-orphans
podman-compose up nvmeof --abort-on-container-exit --exit-code-from nvmeof --remove-orphans
- Create a 'cephnvmf' shortcut/alias so we can run CLI commands to configure the nvmeof gateway:
alias cephnvmf="/bin/podman-compose run --rm nvmeof-cli --server-address 192.168.13.3 --server-port 5500";
- Your pod might come up with a different IP; check its IP by doing the following (see also the sketch after this list):
# get the container ID first
podman ps
# then inspect its settings and grep for the IP
podman inspect <nvmeof pod ID> | grep -i ipadd
- I manually created a 2 GB RBD in the pool 'rbd' for testing:
rbd create --size 2048 rbd/disk01
rbd info disk01
- Create a subsystem:
cephnvmf subsystem add --subsystem nqn.2016-06.io.spdk:nor1devlcph01
- Add a namespace:
cephnvmf namespace add --subsystem nqn.2016-06.io.spdk:nor1devlcph01 --rbd-pool rbd --rbd-image disk01
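(Aside: the gateway's compose-network IP can change across restarts, as the later outputs alternating between 192.168.13.3 and 192.168.13.4 suggest. A one-liner to fetch it and confirm the gRPC port is reachable, assuming the container name below and that nc is installed; the format string is an assumption:)
podman inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ceph-nvmeof_nvmeof_1
nc -vz 192.168.13.3 5500 # substitute the IP printed above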
@caroav @gbregman I want to confirm you're not waiting for more info from me on this issue?
@gsperry2011 sorry for the very delayed response. Can you run: rbd pool init rbd
And retry creating the namespace?
@caroav That was the first thing recommended to me, and the output is above; see my second post in this thread. https://github.com/ceph/ceph-nvmeof/issues/460#issuecomment-1962963421
@caroav I see that the Ceph version here is quincy, 17.*. We use a newer version. Did we try using older Ceph versions?
@gsperry2011 sorry for repeating the same comment. I reproduced exactly the error you describe and was able to make it work after running that command. I'm not really sure what is going on in your setup. Can you upload here the entire log file of the GW from the moment it starts? It prints the configuration at the beginning; maybe that will help.
To sanity check, I created a new pool nvmeoftest, initialized it, then created a new RBD disk01 on the new test pool and tried creating the namespace with this new pool and RBD, and received the same error.
I then created another RBD disk99 on this new test pool and mapped it on a different Ceph node to verify I could create a filesystem on it, mount it, and write data to it successfully, to confirm that RBD and pool functionality is working properly on my cluster.
My steps and output:
Showing that the nvmeoftest pool does not currently exist:
[root@nor1devlcph01 greg.perry]# rados lspools
.mgr
.nfs
cephfs.cephfs01.meta
cephfs.cephfs01.data
rbd
.rgw.root
default.rgw.log
default.rgw.control
default.rgw.meta
default.rgw.buckets.index
default.rgw.buckets.data
hv-testing-pool01
Creating the pool, initializing it, and creating the new RBD disk01:
[root@nor1devlcph01 greg.perry]# ceph osd pool create nvmeoftest
pool 'nvmeoftest' created
[root@nor1devlcph01 greg.perry]# rbd pool init nvmeoftest
[root@nor1devlcph01 greg.perry]# rbd create --size 2048 nvmeoftest/disk01
[root@nor1devlcph01 greg.perry]# rbd info nvmeoftest/disk01
rbd image 'disk01':
size 2 GiB in 512 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 3a9a58ab60e72b
block_name_prefix: rbd_data.3a9a58ab60e72b
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Tue Mar 19 08:38:33 2024
access_timestamp: Tue Mar 19 08:38:33 2024
modify_timestamp: Tue Mar 19 08:38:33 2024
Verifying that the subsystem I created previously still exists:
[root@nor1devlcph01 ceph-nvmeof]# cephnvmf subsystem list
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.6.1
['podman', 'network', 'exists', 'ceph-nvmeof_default']
podman run --name=ceph-nvmeof_nvmeof-cli_tmp42614 --rm -i --label io.podman.compose.config-hash=b67723df334cc193e5d32c14d7c9efb9b5625df1b9fb8705f72317519ea1cd9e --label io.podman.compose.project=ceph-nvmeof --label io.podman.compose.version=1.0.6 --label [email protected] --label com.docker.compose.project=ceph-nvmeof --label com.docker.compose.project.working_dir=/root/git/ceph-nvmeof --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=nvmeof-cli --net ceph-nvmeof_default --network-alias nvmeof-cli --tty quay.io/ceph/nvmeof-cli:1.0.0 --server-address 192.168.13.4 --server-port 5500 subsystem list
Subsystems:
╒═══════════╤═══════════════════════════════════╤════════════╤════════════════════╤══════════════════════╤══════════════════╤═════════════╕
│ Subtype │ NQN │ HA State │ Serial Number │ Model Number │ Controller IDs │ Namespace │
│ │ │ │ │ │ │ Count │
╞═══════════╪═══════════════════════════════════╪════════════╪════════════════════╪══════════════════════╪══════════════════╪═════════════╡
│ NVMe │ nqn.2016-06.io.spdk:nor1devlcph01 │ disabled │ SPDK49375303171247 │ SPDK bdev Controller │ 1-65519 │ 0 │
╘═══════════╧═══════════════════════════════════╧════════════╧════════════════════╧══════════════════════╧══════════════════╧═════════════╛
exit code: 0
Trying to create the namespace:
[root@nor1devlcph01 ceph-nvmeof]# cephnvmf namespace add --subsystem nqn.2016-06.io.spdk:nor1devlcph01 --rbd-pool nvmeoftest --rbd-image disk01
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.6.1
['podman', 'network', 'exists', 'ceph-nvmeof_default']
podman run --name=ceph-nvmeof_nvmeof-cli_tmp10650 --rm -i --label io.podman.compose.config-hash=b67723df334cc193e5d32c14d7c9efb9b5625df1b9fb8705f72317519ea1cd9e --label io.podman.compose.project=ceph-nvmeof --label io.podman.compose.version=1.0.6 --label [email protected] --label com.docker.compose.project=ceph-nvmeof --label com.docker.compose.project.working_dir=/root/git/ceph-nvmeof --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=nvmeof-cli --net ceph-nvmeof_default --network-alias nvmeof-cli --tty quay.io/ceph/nvmeof-cli:1.0.0 --server-address 192.168.13.4 --server-port 5500 namespace add --subsystem nqn.2016-06.io.spdk:nor1devlcph01 --rbd-pool nvmeoftest --rbd-image disk01
Failure adding namespace to nqn.2016-06.io.spdk:nor1devlcph01: Failure creating bdev bdev_ba35b054-fd91-4f59-a1a8-43ac304e11da: Operation not permitted
exit code: 1
On a separate Ceph node I create a new RBD disk99 on this same nvmeoftest pool and mount it, write data, etc.:
[root@nor1devlcph03 greg.perry]# modprobe rbd
[root@nor1devlcph03 greg.perry]# rbd create disk99 --size 1024 --pool nvmeoftest
[root@nor1devlcph03 greg.perry]# rbd map disk99 --pool nvmeoftest --name client.admin
/dev/rbd0
[root@nor1devlcph03 greg.perry]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=8, agsize=32768 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=0 inobtcount=0
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=16 swidth=16 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=16 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Discarding blocks...Done.
[root@nor1devlcph03 greg.perry]# ls /mnt
[root@nor1devlcph03 greg.perry]# mount /dev/rbd0 /mnt
[root@nor1devlcph03 greg.perry]# cd /mnt/
[root@nor1devlcph03 mnt]# ls
[root@nor1devlcph03 mnt]# touch test.txt
[root@nor1devlcph03 mnt]# echo "abc123" >> test.txt
[root@nor1devlcph03 mnt]# cat ./test.txt
abc123
So I believe my RBD setup is working fine? Let me know if there are additional tests I should run to help verify this.
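(Aside: the kernel-client test above uses the host's /etc/ceph, while the gateway uses whatever /etc/ceph its container was given, so a closer reproduction of what SPDK sees would be running the same lookup from inside the gateway container, assuming it is named ceph-nvmeof_nvmeof_1 and that the rbd CLI is present in the image:)
podman exec ceph-nvmeof_nvmeof_1 rbd -p nvmeoftest info disk01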
To help keep things organized I will post my complete logs from my nvmeof containers in a separate post.
I will post the full logs at the bottom of this post and describe my steps to produce them.
- I stopped both of my running containers, quay.io/ceph/vstart-cluster:18.2.1 and quay.io/ceph/nvmeof:1.0.0.
- I removed all nvmeof-* logs from /var/log/ceph.
- I rebooted the system (the only system I've started nvmeof containers on).
Start-up sequence:
- Start the ceph container:
podman-compose up ceph --abort-on-container-exit --exit-code-from ceph --remove-orphans
Once the ceph container appears fully up, the output on the terminal stops and the last message I see is:
[ceph] | ceph dashboard nvmeof-gateway-add -i /dev/fd/63 nvmeof.1
- Start the nvmeof container:
podman-compose up nvmeof --abort-on-container-exit --exit-code-from nvmeof --remove-orphans
This container also appears to start successfully.
- Sanity check that no subsystems exist:
[root@nor1devlcph01 ceph-nvmeof]# cephnvmf subsystem list
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.6.1
['podman', 'network', 'exists', 'ceph-nvmeof_default']
podman run --name=ceph-nvmeof_nvmeof-cli_tmp16938 --rm -i --label io.podman.compose.config-hash=b67723df334cc193e5d32c14d7c9efb9b5625df1b9fb8705f72317519ea1cd9e --label io.podman.compose.project=ceph-nvmeof --label io.podman.compose.version=1.0.6 --label [email protected] --label com.docker.compose.project=ceph-nvmeof --label com.docker.compose.project.working_dir=/root/git/ceph-nvmeof --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=nvmeof-cli --net ceph-nvmeof_default --network-alias nvmeof-cli --tty quay.io/ceph/nvmeof-cli:1.0.0 --server-address 192.168.13.3 --server-port 5500 subsystem list
No subsystems
exit code: 0
- Create a subsystem:
[root@nor1devlcph01 ceph-nvmeof]# cephnvmf subsystem add --subsystem nqn.2016-06.io.spdk:nor1devlcph01
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.6.1
['podman', 'network', 'exists', 'ceph-nvmeof_default']
podman run --name=ceph-nvmeof_nvmeof-cli_tmp27018 --rm -i --label io.podman.compose.config-hash=b67723df334cc193e5d32c14d7c9efb9b5625df1b9fb8705f72317519ea1cd9e --label io.podman.compose.project=ceph-nvmeof --label io.podman.compose.version=1.0.6 --label [email protected] --label com.docker.compose.project=ceph-nvmeof --label com.docker.compose.project.working_dir=/root/git/ceph-nvmeof --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=nvmeof-cli --net ceph-nvmeof_default --network-alias nvmeof-cli --tty quay.io/ceph/nvmeof-cli:1.0.0 --server-address 192.168.13.3 --server-port 5500 subsystem add --subsystem nqn.2016-06.io.spdk:nor1devlcph01
Adding subsystem nqn.2016-06.io.spdk:nor1devlcph01: Successful
exit code: 0
- Verify the subsystem got created:
[root@nor1devlcph01 ceph-nvmeof]# cephnvmf subsystem list
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.6.1
['podman', 'network', 'exists', 'ceph-nvmeof_default']
podman run --name=ceph-nvmeof_nvmeof-cli_tmp56083 --rm -i --label io.podman.compose.config-hash=b67723df334cc193e5d32c14d7c9efb9b5625df1b9fb8705f72317519ea1cd9e --label io.podman.compose.project=ceph-nvmeof --label io.podman.compose.version=1.0.6 --label [email protected] --label com.docker.compose.project=ceph-nvmeof --label com.docker.compose.project.working_dir=/root/git/ceph-nvmeof --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=nvmeof-cli --net ceph-nvmeof_default --network-alias nvmeof-cli --tty quay.io/ceph/nvmeof-cli:1.0.0 --server-address 192.168.13.3 --server-port 5500 subsystem list
Subsystems:
╒═══════════╤═══════════════════════════════════╤════════════╤════════════════════╤══════════════════════╤══════════════════╤═════════════╕
│ Subtype │ NQN │ HA State │ Serial Number │ Model Number │ Controller IDs │ Namespace │
│ │ │ │ │ │ │ Count │
╞═══════════╪═══════════════════════════════════╪════════════╪════════════════════╪══════════════════════╪══════════════════╪═════════════╡
│ NVMe │ nqn.2016-06.io.spdk:nor1devlcph01 │ disabled │ SPDK56572980226401 │ SPDK bdev Controller │ 1-65519 │ 0 │
╘═══════════╧═══════════════════════════════════╧════════════╧════════════════════╧══════════════════════╧══════════════════╧═════════════╛
exit code: 0
- Try to create namespaces and receive the same Operation not permitted error:
[root@nor1devlcph01 ceph-nvmeof]# cephnvmf namespace add --subsystem nqn.2016-06.io.spdk:nor1devlcph01 --rbd-pool rbd --rbd-image disk01
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.6.1
['podman', 'network', 'exists', 'ceph-nvmeof_default']
podman run --name=ceph-nvmeof_nvmeof-cli_tmp59379 --rm -i --label io.podman.compose.config-hash=b67723df334cc193e5d32c14d7c9efb9b5625df1b9fb8705f72317519ea1cd9e --label io.podman.compose.project=ceph-nvmeof --label io.podman.compose.version=1.0.6 --label [email protected] --label com.docker.compose.project=ceph-nvmeof --label com.docker.compose.project.working_dir=/root/git/ceph-nvmeof --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=nvmeof-cli --net ceph-nvmeof_default --network-alias nvmeof-cli --tty quay.io/ceph/nvmeof-cli:1.0.0 --server-address 192.168.13.3 --server-port 5500 namespace add --subsystem nqn.2016-06.io.spdk:nor1devlcph01 --rbd-pool rbd --rbd-image disk01
Failure adding namespace to nqn.2016-06.io.spdk:nor1devlcph01: Failure creating bdev bdev_94032bc0-9432-4b1c-b55d-273aa0cac42e: Operation not permitted
exit code: 1
[root@nor1devlcph01 ceph-nvmeof]# cephnvmf namespace add --subsystem nqn.2016-06.io.spdk:nor1devlcph01 --rbd-pool nvmeoftest --rbd-image disk01
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.6.1
['podman', 'network', 'exists', 'ceph-nvmeof_default']
podman run --name=ceph-nvmeof_nvmeof-cli_tmp64567 --rm -i --label io.podman.compose.config-hash=b67723df334cc193e5d32c14d7c9efb9b5625df1b9fb8705f72317519ea1cd9e --label io.podman.compose.project=ceph-nvmeof --label io.podman.compose.version=1.0.6 --label [email protected] --label com.docker.compose.project=ceph-nvmeof --label com.docker.compose.project.working_dir=/root/git/ceph-nvmeof --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=nvmeof-cli --net ceph-nvmeof_default --network-alias nvmeof-cli --tty quay.io/ceph/nvmeof-cli:1.0.0 --server-address 192.168.13.3 --server-port 5500 namespace add --subsystem nqn.2016-06.io.spdk:nor1devlcph01 --rbd-pool nvmeoftest --rbd-image disk01
Failure adding namespace to nqn.2016-06.io.spdk:nor1devlcph01: Failure creating bdev bdev_d62817d2-a86e-4866-9b21-e90c5f6db0c7: Operation not permitted
exit code: 1
A few interesting clues in the logs:
- As the ceph container is coming online, it seems like it is creating the rbd pool, which is strange since it already exists:
2024-03-19T14:50:04.370+0000 7f63c0e86640 -1 WARNING: all dangerous and experimental features are enabled.
pool 'rbd' created
[ceph] | ceph dashboard nvmeof-gateway-add -i /dev/fd/63 nvmeof.1
- In the nvmeof container log I noticed this:
INFO:nvmeof:Connected to Ceph with version "18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)"
But my actual ceph version is older:
ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)
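(Aside: 18.2.1 reef matches the bundled quay.io/ceph/vstart-cluster:18.2.1 container from the compose file rather than the external 17.2.7 quincy cluster, which suggests the gateway is reading the ceph.conf generated by that vstart container. A hedged way to confirm which cluster the gateway sees, assuming the container name below:)
ceph fsid # fsid of the external cluster, run on a host node
podman exec ceph-nvmeof_nvmeof_1 grep -E 'fsid|mon_host' /etc/ceph/ceph.conf # fsid/mon_host the gateway is actually using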
Full logs:
- ceph container:
[root@nor1devlcph01 ceph-nvmeof]# podman-compose up ceph --abort-on-container-exit --exit-code-from ceph --remove-orphans
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.6.1
** excluding: {'nvmeof', 'nvmeof-devel', 'spdk-rpm-export', 'bdevperf', 'ceph-devel', 'nvmeof-python-export', 'spdk', 'nvmeof-cli', 'discovery', 'nvmeof-builder', 'nvmeof-base', 'nvmeof-builder-base', 'ceph-base'}
['podman', 'inspect', '-t', 'image', '-f', '{{.Id}}', 'quay.io/ceph/vstart-cluster:18.2.1']
['podman', 'ps', '--filter', 'label=io.podman.compose.project=ceph-nvmeof', '-a', '--format', '{{ index .Labels "io.podman.compose.config-hash"}}']
** skipping: ceph-nvmeof_spdk_1
** skipping: ceph-nvmeof_spdk-rpm-export_1
** skipping: ceph-nvmeof_bdevperf_1
** skipping: ceph-nvmeof_ceph-base_1
podman volume inspect ceph-nvmeof_ceph-conf || podman volume create ceph-nvmeof_ceph-conf
['podman', 'volume', 'inspect', 'ceph-nvmeof_ceph-conf']
['podman', 'network', 'exists', 'ceph-nvmeof_default']
podman create --name=ceph --label io.podman.compose.config-hash=b67723df334cc193e5d32c14d7c9efb9b5625df1b9fb8705f72317519ea1cd9e --label io.podman.compose.project=ceph-nvmeof --label io.podman.compose.version=1.0.6 --label [email protected] --label com.docker.compose.project=ceph-nvmeof --label com.docker.compose.project.working_dir=/root/git/ceph-nvmeof --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=ceph -e TOUCHFILE=/tmp/ceph.touch -v ceph-nvmeof_ceph-conf:/etc/ceph --net ceph-nvmeof_default --network-alias ceph --ip=192.168.13.2 --ip6=2001:db8::2 --ulimit nofile=1024 --entrypoint ["sh", "-c", "./vstart.sh --new $CEPH_VSTART_ARGS && ceph osd pool create rbd && echo ceph dashboard nvmeof-gateway-add -i <(echo nvmeof-devel:5500) nvmeof.1 && sleep infinity"] --healthcheck-command /bin/sh -c 'ceph osd pool stats rbd' --healthcheck-interval 3s --healthcheck-start-period 6s --healthcheck-retries 10 quay.io/ceph/vstart-cluster:18.2.1
c5cd43c79e72607ae5545164f66a2158c43ad157c57c97211874b3f039a5bb39
exit code: 0
** skipping: ceph-devel
** skipping: ceph-nvmeof_nvmeof-base_1
** skipping: ceph-nvmeof_nvmeof-builder-base_1
** skipping: ceph-nvmeof_nvmeof-builder_1
** skipping: ceph-nvmeof_nvmeof-python-export_1
** skipping: ceph-nvmeof_nvmeof-cli_1
** skipping: ceph-nvmeof_nvmeof_1
** skipping: ceph-nvmeof_discovery_1
** skipping: ceph-nvmeof_nvmeof-devel_1
** skipping: ceph-nvmeof_spdk_1
** skipping: ceph-nvmeof_spdk-rpm-export_1
** skipping: ceph-nvmeof_bdevperf_1
** skipping: ceph-nvmeof_ceph-base_1
podman start -a ceph
grep: CMakeCache.txt: No such file or directory
ceph-mgr dashboard not built - disabling.
rm -f core*
[ceph] | hostname c5cd43c79e72
[ceph] | ip 192.168.13.2
[ceph] | port 10000
/usr/bin/ceph-authtool --create-keyring --gen-key --name=mon. /etc/ceph/keyring --cap mon 'allow *'
** skipping: ceph-devel
** skipping: ceph-nvmeof_nvmeof-base_1
** skipping: ceph-nvmeof_nvmeof-builder-base_1
** skipping: ceph-nvmeof_nvmeof-builder_1
** skipping: ceph-nvmeof_nvmeof-python-export_1
** skipping: ceph-nvmeof_nvmeof-cli_1
** skipping: ceph-nvmeof_nvmeof_1
** skipping: ceph-nvmeof_discovery_1
** skipping: ceph-nvmeof_nvmeof-devel_1
[ceph] | creating /etc/ceph/keyring
/usr/bin/ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /etc/ceph/keyring
/usr/bin/monmaptool --create --clobber --addv a [v2:192.168.13.2:10000,v1:192.168.13.2:10001] --print /tmp/ceph_monmap.7
[ceph] | /usr/bin/monmaptool: monmap file /tmp/ceph_monmap.7
[ceph] | /usr/bin/monmaptool: generated fsid 3c6a789c-610e-4230-9b56-697504564a68
[ceph] | setting min_mon_release = pacific
[ceph] | epoch 0
[ceph] | fsid 3c6a789c-610e-4230-9b56-697504564a68
[ceph] | last_changed 2024-03-19T14:49:30.615458+0000
[ceph] | created 2024-03-19T14:49:30.615458+0000
[ceph] | min_mon_release 16 (pacific)
[ceph] | election_strategy: 1
[ceph] | 0: [v2:192.168.13.2:10000/0,v1:192.168.13.2:10001/0] mon.a
[ceph] | /usr/bin/monmaptool: writing epoch 0 to /tmp/ceph_monmap.7 (1 monitors)
rm -rf -- /ceph/dev/mon.a
mkdir -p /ceph/dev/mon.a
/usr/bin/ceph-mon --mkfs -c /etc/ceph/ceph.conf -i a --monmap=/tmp/ceph_monmap.7 --keyring=/etc/ceph/keyring
rm -- /tmp/ceph_monmap.7
/usr/bin/ceph-mon -i a -c /etc/ceph/ceph.conf
Populating config ...
[ceph] |
[ceph] | [mgr]
[ceph] | mgr/telemetry/enable = false
[ceph] | mgr/telemetry/nag = false
[ceph] | creating /ceph/dev/mgr.x/keyring
/usr/bin/ceph -c /etc/ceph/ceph.conf -k /etc/ceph/keyring -i /ceph/dev/mgr.x/keyring auth add mgr.x mon 'allow profile mgr' mds 'allow *' osd 'allow *'
added key for mgr.x
/usr/bin/ceph -c /etc/ceph/ceph.conf -k /etc/ceph/keyring config set mgr mgr/prometheus/x/server_port 9283 --force
/usr/bin/ceph -c /etc/ceph/ceph.conf -k /etc/ceph/keyring config set mgr mgr/restful/x/server_port 12000 --force
Starting mgr.x
/usr/bin/ceph-mgr -i x -c /etc/ceph/ceph.conf
/usr/bin/ceph -c /etc/ceph/ceph.conf -k /etc/ceph/keyring mgr stat
[ceph] | false
waiting for mgr to become available
/usr/bin/ceph -c /etc/ceph/ceph.conf -k /etc/ceph/keyring mgr stat
waiting for mgr to become available
[ceph] | false
/usr/bin/ceph -c /etc/ceph/ceph.conf -k /etc/ceph/keyring mgr stat
[ceph] | false
waiting for mgr to become available
/usr/bin/ceph -c /etc/ceph/ceph.conf -k /etc/ceph/keyring mgr stat
[ceph] | false
waiting for mgr to become available
/usr/bin/ceph -c /etc/ceph/ceph.conf -k /etc/ceph/keyring mgr stat
[ceph] | true
[ceph] | add osd0 49eefa14-8493-4672-bfd3-bc3aa4eb6137
/usr/bin/ceph -c /etc/ceph/ceph.conf -k /etc/ceph/keyring osd new 49eefa14-8493-4672-bfd3-bc3aa4eb6137 -i /ceph/dev/osd0/new.json
[ceph] | 0
[ceph] | /usr/bin/ceph-osd -i 0 -c /etc/ceph/ceph.conf --mkfs --key AQAEpvll/qBzABAApY9jikIM6sNBiUx2U4icIg== --osd-uuid 49eefa14-8493-4672-bfd3-bc3aa4eb6137
[ceph] | 2024-03-19T14:49:40.884+0000 7f9f97a394c0 -1 bluestore(/ceph/dev/osd0/block) _read_bdev_label failed to open /ceph/dev/osd0/block: (2) No such file or directory
[ceph] | 2024-03-19T14:49:40.884+0000 7f9f97a394c0 -1 bluestore(/ceph/dev/osd0/block) _read_bdev_label failed to open /ceph/dev/osd0/block: (2) No such file or directory
[ceph] | 2024-03-19T14:49:40.884+0000 7f9f97a394c0 -1 bluestore(/ceph/dev/osd0/block) _read_bdev_label failed to open /ceph/dev/osd0/block: (2) No such file or directory
[ceph] | 2024-03-19T14:49:40.885+0000 7f9f97a394c0 -1 bluestore(/ceph/dev/osd0) _read_fsid unparsable uuid
[ceph] | 2024-03-19T14:49:40.887+0000 7f9f97a394c0 -1 bdev(0x55a9277b0a80 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:40.889+0000 7f9f97a394c0 -1 bdev(0x55a9277b0700 /ceph/dev/osd0/block.db) unable to get device name for /ceph/dev/osd0/block.db: (22) Invalid argument
[ceph] | 2024-03-19T14:49:40.890+0000 7f9f97a394c0 -1 bdev(0x55a9277b0000 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:40.891+0000 7f9f97a394c0 -1 bdev(0x55a9277b0e00 /ceph/dev/osd0/block.wal) unable to get device name for /ceph/dev/osd0/block.wal: (22) Invalid argument
[ceph] | 2024-03-19T14:49:41.902+0000 7f9f97a394c0 -1 bdev(0x55a9277b0a80 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:41.903+0000 7f9f97a394c0 -1 bdev(0x55a9277b0000 /ceph/dev/osd0/block.db) unable to get device name for /ceph/dev/osd0/block.db: (22) Invalid argument
[ceph] | 2024-03-19T14:49:41.903+0000 7f9f97a394c0 -1 bdev(0x55a9277b1880 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:41.904+0000 7f9f97a394c0 -1 bdev(0x55a9277b1500 /ceph/dev/osd0/block.wal) unable to get device name for /ceph/dev/osd0/block.wal: (22) Invalid argument
[ceph] | 2024-03-19T14:49:42.668+0000 7f9f97a394c0 -1 bdev(0x55a9277b1880 /ceph/dev/osd0/block.db) unable to get device name for /ceph/dev/osd0/block.db: (22) Invalid argument
[ceph] | 2024-03-19T14:49:42.668+0000 7f9f97a394c0 -1 bdev(0x55a9277b0000 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:42.669+0000 7f9f97a394c0 -1 bdev(0x55a9277b1500 /ceph/dev/osd0/block.wal) unable to get device name for /ceph/dev/osd0/block.wal: (22) Invalid argument
[ceph] | 2024-03-19T14:49:44.167+0000 7f9f97a394c0 -1 bdev(0x55a9277b0a80 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:44.168+0000 7f9f97a394c0 -1 bdev(0x55a9277b0000 /ceph/dev/osd0/block.db) unable to get device name for /ceph/dev/osd0/block.db: (22) Invalid argument
[ceph] | 2024-03-19T14:49:44.169+0000 7f9f97a394c0 -1 bdev(0x55a9277b1880 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:44.170+0000 7f9f97a394c0 -1 bdev(0x55a9277b1500 /ceph/dev/osd0/block.wal) unable to get device name for /ceph/dev/osd0/block.wal: (22) Invalid argument
[ceph] | 2024-03-19T14:49:44.935+0000 7f9f97a394c0 -1 bdev(0x55a9277b1880 /ceph/dev/osd0/block.db) unable to get device name for /ceph/dev/osd0/block.db: (22) Invalid argument
[ceph] | 2024-03-19T14:49:44.936+0000 7f9f97a394c0 -1 bdev(0x55a9277b0000 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:44.937+0000 7f9f97a394c0 -1 bdev(0x55a9277b1500 /ceph/dev/osd0/block.wal) unable to get device name for /ceph/dev/osd0/block.wal: (22) Invalid argument
[ceph] | start osd.0
[ceph] | osd 0 /usr/bin/ceph-osd -i 0 -c /etc/ceph/ceph.conf
/usr/bin/ceph-osd -i 0 -c /etc/ceph/ceph.conf
[ceph] | add osd1 d00521c3-2867-4589-933c-22dbc6de1402
/usr/bin/ceph -c /etc/ceph/ceph.conf -k /etc/ceph/keyring osd new d00521c3-2867-4589-933c-22dbc6de1402 -i /ceph/dev/osd1/new.json
2024-03-19T14:49:45.871+0000 7f47d52e34c0 -1 bdev(0x55cf03684a80 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
2024-03-19T14:49:45.872+0000 7f47d52e34c0 -1 bdev(0x55cf03684700 /ceph/dev/osd0/block.db) unable to get device name for /ceph/dev/osd0/block.db: (22) Invalid argument
2024-03-19T14:49:45.872+0000 7f47d52e34c0 -1 bdev(0x55cf03684000 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
2024-03-19T14:49:45.873+0000 7f47d52e34c0 -1 bdev(0x55cf03684e00 /ceph/dev/osd0/block.wal) unable to get device name for /ceph/dev/osd0/block.wal: (22) Invalid argument
[ceph] | 1
[ceph] | /usr/bin/ceph-osd -i 1 -c /etc/ceph/ceph.conf --mkfs --key AQAJpvll089wMxAAUByNLQdztRnRo5JijpsedA== --osd-uuid d00521c3-2867-4589-933c-22dbc6de1402
[ceph] | 2024-03-19T14:49:46.739+0000 7fe406c724c0 -1 bluestore(/ceph/dev/osd1/block) _read_bdev_label failed to open /ceph/dev/osd1/block: (2) No such file or directory
[ceph] | 2024-03-19T14:49:46.739+0000 7fe406c724c0 -1 bluestore(/ceph/dev/osd1/block) _read_bdev_label failed to open /ceph/dev/osd1/block: (2) No such file or directory
[ceph] | 2024-03-19T14:49:46.739+0000 7fe406c724c0 -1 bluestore(/ceph/dev/osd1/block) _read_bdev_label failed to open /ceph/dev/osd1/block: (2) No such file or directory
[ceph] | 2024-03-19T14:49:46.741+0000 7fe406c724c0 -1 bluestore(/ceph/dev/osd1) _read_fsid unparsable uuid
[ceph] | 2024-03-19T14:49:46.742+0000 7fe406c724c0 -1 bdev(0x558fcc7e4a80 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:46.744+0000 7fe406c724c0 -1 bdev(0x558fcc7e4700 /ceph/dev/osd1/block.db) unable to get device name for /ceph/dev/osd1/block.db: (22) Invalid argument
[ceph] | 2024-03-19T14:49:46.745+0000 7fe406c724c0 -1 bdev(0x558fcc7e4000 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:46.746+0000 7fe406c724c0 -1 bdev(0x558fcc7e4e00 /ceph/dev/osd1/block.wal) unable to get device name for /ceph/dev/osd1/block.wal: (22) Invalid argument
2024-03-19T14:49:46.895+0000 7f47d52e34c0 -1 Falling back to public interface
2024-03-19T14:49:46.911+0000 7f47d52e34c0 -1 bdev(0x55cf03684700 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
2024-03-19T14:49:47.184+0000 7f47d52e34c0 -1 bdev(0x55cf03684700 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
2024-03-19T14:49:47.450+0000 7f47d52e34c0 -1 bdev(0x55cf03684700 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
2024-03-19T14:49:47.712+0000 7f47d52e34c0 -1 bdev(0x55cf03684700 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:47.756+0000 7fe406c724c0 -1 bdev(0x558fcc7e5880 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:47.757+0000 7fe406c724c0 -1 bdev(0x558fcc7e5180 /ceph/dev/osd1/block.db) unable to get device name for /ceph/dev/osd1/block.db: (22) Invalid argument
[ceph] | 2024-03-19T14:49:47.758+0000 7fe406c724c0 -1 bdev(0x558fcc7e4e00 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:47.759+0000 7fe406c724c0 -1 bdev(0x558fcc7e4700 /ceph/dev/osd1/block.wal) unable to get device name for /ceph/dev/osd1/block.wal: (22) Invalid argument
2024-03-19T14:49:47.979+0000 7f47d52e34c0 -1 bdev(0x55cf03684700 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
2024-03-19T14:49:48.244+0000 7f47d52e34c0 -1 bdev(0x55cf03684700 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
2024-03-19T14:49:48.510+0000 7f47d52e34c0 -1 bdev(0x55cf03684700 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
2024-03-19T14:49:48.511+0000 7f47d52e34c0 -1 bdev(0x55cf03684000 /ceph/dev/osd0/block.db) unable to get device name for /ceph/dev/osd0/block.db: (22) Invalid argument
2024-03-19T14:49:48.517+0000 7f47d52e34c0 -1 bdev(0x55cf03684a80 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
2024-03-19T14:49:48.518+0000 7f47d52e34c0 -1 bdev(0x55cf03685880 /ceph/dev/osd0/block.wal) unable to get device name for /ceph/dev/osd0/block.wal: (22) Invalid argument
[ceph] | 2024-03-19T14:49:48.533+0000 7fe406c724c0 -1 bdev(0x558fcc7e4e00 /ceph/dev/osd1/block.db) unable to get device name for /ceph/dev/osd1/block.db: (22) Invalid argument
[ceph] | 2024-03-19T14:49:48.534+0000 7fe406c724c0 -1 bdev(0x558fcc7e5180 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:48.535+0000 7fe406c724c0 -1 bdev(0x558fcc7e4700 /ceph/dev/osd1/block.wal) unable to get device name for /ceph/dev/osd1/block.wal: (22) Invalid argument
2024-03-19T14:49:49.280+0000 7f47d52e34c0 -1 bdev(0x55cf03684a80 /ceph/dev/osd0/block.db) unable to get device name for /ceph/dev/osd0/block.db: (22) Invalid argument
2024-03-19T14:49:49.281+0000 7f47d52e34c0 -1 bdev(0x55cf03684000 /ceph/dev/osd0/block) unable to get device name for /ceph/dev/osd0/block: (22) Invalid argument
2024-03-19T14:49:49.282+0000 7f47d52e34c0 -1 bdev(0x55cf03685880 /ceph/dev/osd0/block.wal) unable to get device name for /ceph/dev/osd0/block.wal: (22) Invalid argument
2024-03-19T14:49:49.352+0000 7f47d52e34c0 -1 osd.0 0 log_to_monitors true
[ceph] | 2024-03-19T14:49:50.051+0000 7fe406c724c0 -1 bdev(0x558fcc7e5880 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:50.052+0000 7fe406c724c0 -1 bdev(0x558fcc7e5180 /ceph/dev/osd1/block.db) unable to get device name for /ceph/dev/osd1/block.db: (22) Invalid argument
[ceph] | 2024-03-19T14:49:50.052+0000 7fe406c724c0 -1 bdev(0x558fcc7e4e00 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:50.053+0000 7fe406c724c0 -1 bdev(0x558fcc7e4700 /ceph/dev/osd1/block.wal) unable to get device name for /ceph/dev/osd1/block.wal: (22) Invalid argument
[ceph] | 2024-03-19T14:49:50.819+0000 7fe406c724c0 -1 bdev(0x558fcc7e4e00 /ceph/dev/osd1/block.db) unable to get device name for /ceph/dev/osd1/block.db: (22) Invalid argument
[ceph] | 2024-03-19T14:49:50.820+0000 7fe406c724c0 -1 bdev(0x558fcc7e5180 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:50.821+0000 7fe406c724c0 -1 bdev(0x558fcc7e4700 /ceph/dev/osd1/block.wal) unable to get device name for /ceph/dev/osd1/block.wal: (22) Invalid argument
[ceph] | start osd.1
[ceph] | osd 1 /usr/bin/ceph-osd -i 1 -c /etc/ceph/ceph.conf
/usr/bin/ceph-osd -i 1 -c /etc/ceph/ceph.conf
[ceph] | add osd2 83ab10ac-ac84-4168-bdc8-ac155b19c8ca
/usr/bin/ceph -c /etc/ceph/ceph.conf -k /etc/ceph/keyring osd new 83ab10ac-ac84-4168-bdc8-ac155b19c8ca -i /ceph/dev/osd2/new.json
2024-03-19T14:49:51.872+0000 7f47deca84c0 -1 bdev(0x55a957b1ca80 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
2024-03-19T14:49:51.873+0000 7f47deca84c0 -1 bdev(0x55a957b1c700 /ceph/dev/osd1/block.db) unable to get device name for /ceph/dev/osd1/block.db: (22) Invalid argument
2024-03-19T14:49:51.874+0000 7f47deca84c0 -1 bdev(0x55a957b1c000 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
2024-03-19T14:49:51.874+0000 7f47deca84c0 -1 bdev(0x55a957b1ce00 /ceph/dev/osd1/block.wal) unable to get device name for /ceph/dev/osd1/block.wal: (22) Invalid argument
[ceph] | 2
[ceph] | /usr/bin/ceph-osd -i 2 -c /etc/ceph/ceph.conf --mkfs --key AQAPpvll7v64MxAApMQ7CCsJfJmru2NoswkmiQ== --osd-uuid 83ab10ac-ac84-4168-bdc8-ac155b19c8ca
[ceph] | 2024-03-19T14:49:52.732+0000 7eff213f94c0 -1 bluestore(/ceph/dev/osd2/block) _read_bdev_label failed to open /ceph/dev/osd2/block: (2) No such file or directory
[ceph] | 2024-03-19T14:49:52.733+0000 7eff213f94c0 -1 bluestore(/ceph/dev/osd2/block) _read_bdev_label failed to open /ceph/dev/osd2/block: (2) No such file or directory
[ceph] | 2024-03-19T14:49:52.733+0000 7eff213f94c0 -1 bluestore(/ceph/dev/osd2/block) _read_bdev_label failed to open /ceph/dev/osd2/block: (2) No such file or directory
[ceph] | 2024-03-19T14:49:52.734+0000 7eff213f94c0 -1 bluestore(/ceph/dev/osd2) _read_fsid unparsable uuid
[ceph] | 2024-03-19T14:49:52.735+0000 7eff213f94c0 -1 bdev(0x55a358de6a80 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:52.737+0000 7eff213f94c0 -1 bdev(0x55a358de6700 /ceph/dev/osd2/block.db) unable to get device name for /ceph/dev/osd2/block.db: (22) Invalid argument
[ceph] | 2024-03-19T14:49:52.738+0000 7eff213f94c0 -1 bdev(0x55a358de6000 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:52.739+0000 7eff213f94c0 -1 bdev(0x55a358de6e00 /ceph/dev/osd2/block.wal) unable to get device name for /ceph/dev/osd2/block.wal: (22) Invalid argument
2024-03-19T14:49:52.905+0000 7f47deca84c0 -1 Falling back to public interface
2024-03-19T14:49:52.926+0000 7f47deca84c0 -1 bdev(0x55a957b1c700 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
2024-03-19T14:49:53.194+0000 7f47deca84c0 -1 bdev(0x55a957b1c700 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
2024-03-19T14:49:53.471+0000 7f47deca84c0 -1 bdev(0x55a957b1c700 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
2024-03-19T14:49:53.733+0000 7f47deca84c0 -1 bdev(0x55a957b1c700 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:53.752+0000 7eff213f94c0 -1 bdev(0x55a358de7880 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:53.753+0000 7eff213f94c0 -1 bdev(0x55a358de7500 /ceph/dev/osd2/block.db) unable to get device name for /ceph/dev/osd2/block.db: (22) Invalid argument
[ceph] | 2024-03-19T14:49:53.754+0000 7eff213f94c0 -1 bdev(0x55a358de7180 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:53.755+0000 7eff213f94c0 -1 bdev(0x55a358de6e00 /ceph/dev/osd2/block.wal) unable to get device name for /ceph/dev/osd2/block.wal: (22) Invalid argument
2024-03-19T14:49:54.000+0000 7f47deca84c0 -1 bdev(0x55a957b1c700 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
2024-03-19T14:49:54.274+0000 7f47deca84c0 -1 bdev(0x55a957b1c700 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:54.528+0000 7eff213f94c0 -1 bdev(0x55a358de7180 /ceph/dev/osd2/block.db) unable to get device name for /ceph/dev/osd2/block.db: (22) Invalid argument
[ceph] | 2024-03-19T14:49:54.529+0000 7eff213f94c0 -1 bdev(0x55a358de7500 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:54.530+0000 7eff213f94c0 -1 bdev(0x55a358de6e00 /ceph/dev/osd2/block.wal) unable to get device name for /ceph/dev/osd2/block.wal: (22) Invalid argument
2024-03-19T14:49:54.551+0000 7f47deca84c0 -1 bdev(0x55a957b1c700 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
2024-03-19T14:49:54.552+0000 7f47deca84c0 -1 bdev(0x55a957b1c000 /ceph/dev/osd1/block.db) unable to get device name for /ceph/dev/osd1/block.db: (22) Invalid argument
2024-03-19T14:49:54.568+0000 7f47deca84c0 -1 bdev(0x55a957b1ca80 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
2024-03-19T14:49:54.569+0000 7f47deca84c0 -1 bdev(0x55a957b1d880 /ceph/dev/osd1/block.wal) unable to get device name for /ceph/dev/osd1/block.wal: (22) Invalid argument
2024-03-19T14:49:55.095+0000 7f47deca84c0 -1 bdev(0x55a957b1ca80 /ceph/dev/osd1/block.db) unable to get device name for /ceph/dev/osd1/block.db: (22) Invalid argument
2024-03-19T14:49:55.095+0000 7f47deca84c0 -1 bdev(0x55a957b1c000 /ceph/dev/osd1/block) unable to get device name for /ceph/dev/osd1/block: (22) Invalid argument
2024-03-19T14:49:55.096+0000 7f47deca84c0 -1 bdev(0x55a957b1d880 /ceph/dev/osd1/block.wal) unable to get device name for /ceph/dev/osd1/block.wal: (22) Invalid argument
2024-03-19T14:49:55.161+0000 7f47deca84c0 -1 osd.1 0 log_to_monitors true
[ceph] | 2024-03-19T14:49:56.022+0000 7eff213f94c0 -1 bdev(0x55a358de7880 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:56.023+0000 7eff213f94c0 -1 bdev(0x55a358de7500 /ceph/dev/osd2/block.db) unable to get device name for /ceph/dev/osd2/block.db: (22) Invalid argument
[ceph] | 2024-03-19T14:49:56.024+0000 7eff213f94c0 -1 bdev(0x55a358de7180 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:56.024+0000 7eff213f94c0 -1 bdev(0x55a358de6e00 /ceph/dev/osd2/block.wal) unable to get device name for /ceph/dev/osd2/block.wal: (22) Invalid argument
[ceph] | 2024-03-19T14:49:56.791+0000 7eff213f94c0 -1 bdev(0x55a358de7180 /ceph/dev/osd2/block.db) unable to get device name for /ceph/dev/osd2/block.db: (22) Invalid argument
[ceph] | 2024-03-19T14:49:56.792+0000 7eff213f94c0 -1 bdev(0x55a358de7500 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
[ceph] | 2024-03-19T14:49:56.793+0000 7eff213f94c0 -1 bdev(0x55a358de6e00 /ceph/dev/osd2/block.wal) unable to get device name for /ceph/dev/osd2/block.wal: (22) Invalid argument
[ceph] | start osd.2
[ceph] | osd 2 /usr/bin/ceph-osd -i 2 -c /etc/ceph/ceph.conf
/usr/bin/ceph-osd -i 2 -c /etc/ceph/ceph.conf
2024-03-19T14:49:57.858+0000 7fa4573c74c0 -1 bdev(0x55dbd19caa80 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
2024-03-19T14:49:57.859+0000 7fa4573c74c0 -1 bdev(0x55dbd19ca700 /ceph/dev/osd2/block.db) unable to get device name for /ceph/dev/osd2/block.db: (22) Invalid argument
2024-03-19T14:49:57.860+0000 7fa4573c74c0 -1 bdev(0x55dbd19ca000 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
2024-03-19T14:49:57.861+0000 7fa4573c74c0 -1 bdev(0x55dbd19cae00 /ceph/dev/osd2/block.wal) unable to get device name for /ceph/dev/osd2/block.wal: (22) Invalid argument
2024-03-19T14:49:58.875+0000 7fa4573c74c0 -1 Falling back to public interface
2024-03-19T14:49:58.901+0000 7fa4573c74c0 -1 bdev(0x55dbd19ca700 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
2024-03-19T14:49:59.169+0000 7fa4573c74c0 -1 bdev(0x55dbd19ca700 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
2024-03-19T14:49:59.436+0000 7fa4573c74c0 -1 bdev(0x55dbd19ca700 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
2024-03-19T14:49:59.699+0000 7fa4573c74c0 -1 bdev(0x55dbd19ca700 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
2024-03-19T14:49:59.962+0000 7fa4573c74c0 -1 bdev(0x55dbd19ca700 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
2024-03-19T14:50:00.230+0000 7fa4573c74c0 -1 bdev(0x55dbd19ca700 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
2024-03-19T14:50:00.506+0000 7fa4573c74c0 -1 bdev(0x55dbd19ca700 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
2024-03-19T14:50:00.507+0000 7fa4573c74c0 -1 bdev(0x55dbd19ca000 /ceph/dev/osd2/block.db) unable to get device name for /ceph/dev/osd2/block.db: (22) Invalid argument
2024-03-19T14:50:00.521+0000 7fa4573c74c0 -1 bdev(0x55dbd19caa80 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
2024-03-19T14:50:00.522+0000 7fa4573c74c0 -1 bdev(0x55dbd19cb880 /ceph/dev/osd2/block.wal) unable to get device name for /ceph/dev/osd2/block.wal: (22) Invalid argument
2024-03-19T14:50:01.043+0000 7fa4573c74c0 -1 bdev(0x55dbd19caa80 /ceph/dev/osd2/block.db) unable to get device name for /ceph/dev/osd2/block.db: (22) Invalid argument
2024-03-19T14:50:01.044+0000 7fa4573c74c0 -1 bdev(0x55dbd19ca000 /ceph/dev/osd2/block) unable to get device name for /ceph/dev/osd2/block: (22) Invalid argument
2024-03-19T14:50:01.045+0000 7fa4573c74c0 -1 bdev(0x55dbd19cb880 /ceph/dev/osd2/block.wal) unable to get device name for /ceph/dev/osd2/block.wal: (22) Invalid argument
2024-03-19T14:50:01.121+0000 7fa4573c74c0 -1 osd.2 0 log_to_monitors true
OSDs started
vstart cluster complete. Use stop.sh to stop. See out/* (e.g. 'tail -f out/????') for debug output.
[ceph] |
[ceph] |
[ceph] |
[ceph] | export PYTHONPATH=/usr/share/ceph/mgr:/usr/lib64/ceph/cython_modules/lib.3:$PYTHONPATH
[ceph] | export LD_LIBRARY_PATH=/usr/lib64/ceph:$LD_LIBRARY_PATH
[ceph] | export PATH=/ceph/bin:$PATH
[ceph] | CEPH_DEV=1
2024-03-19T14:50:04.345+0000 7f63c0e86640 -1 WARNING: all dangerous and experimental features are enabled.
2024-03-19T14:50:04.370+0000 7f63c0e86640 -1 WARNING: all dangerous and experimental features are enabled.
pool 'rbd' created
[ceph] | ceph dashboard nvmeof-gateway-add -i /dev/fd/63 nvmeof.1
-
nvmeof container:
[root@nor1devlcph01 ceph-nvmeof]# podman-compose up nvmeof --abort-on-container-exit --exit-code-from nvmeof --remove-orphans
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.6.1
** excluding: {'nvmeof-builder', 'ceph-devel', 'nvmeof-cli', 'nvmeof-builder-base', 'discovery', 'nvmeof-base', 'spdk', 'ceph-base', 'nvmeof-python-export', 'nvmeof-devel', 'spdk-rpm-export', 'bdevperf'}
['podman', 'inspect', '-t', 'image', '-f', '{{.Id}}', 'quay.io/ceph/nvmeof:1.0.0']
['podman', 'ps', '--filter', 'label=io.podman.compose.project=ceph-nvmeof', '-a', '--format', '{{ index .Labels "io.podman.compose.config-hash"}}']
** skipping: ceph-nvmeof_spdk_1
** skipping: ceph-nvmeof_spdk-rpm-export_1
** skipping: ceph-nvmeof_bdevperf_1
** skipping: ceph-nvmeof_ceph-base_1
podman volume inspect ceph-nvmeof_ceph-conf || podman volume create ceph-nvmeof_ceph-conf
['podman', 'volume', 'inspect', 'ceph-nvmeof_ceph-conf']
['podman', 'network', 'exists', 'ceph-nvmeof_default']
podman create --name=ceph --label io.podman.compose.config-hash=b67723df334cc193e5d32c14d7c9efb9b5625df1b9fb8705f72317519ea1cd9e --label io.podman.compose.project=ceph-nvmeof --label io.podman.compose.version=1.0.6 --label [email protected] --label com.docker.compose.project=ceph-nvmeof --label com.docker.compose.project.working_dir=/root/git/ceph-nvmeof --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=ceph -e TOUCHFILE=/tmp/ceph.touch -v ceph-nvmeof_ceph-conf:/etc/ceph --net ceph-nvmeof_default --network-alias ceph --ip=192.168.13.2 --ip6=2001:db8::2 --ulimit nofile=1024 --entrypoint ["sh", "-c", "./vstart.sh --new $CEPH_VSTART_ARGS && ceph osd pool create rbd && echo ceph dashboard nvmeof-gateway-add -i <(echo nvmeof-devel:5500) nvmeof.1 && sleep infinity"] --healthcheck-command /bin/sh -c 'ceph osd pool stats rbd' --healthcheck-interval 3s --healthcheck-start-period 6s --healthcheck-retries 10 quay.io/ceph/vstart-cluster:18.2.1
Error: creating container storage: the container name "ceph" is already in use by c5cd43c79e72607ae5545164f66a2158c43ad157c57c97211874b3f039a5bb39. You have to remove that container to be able to reuse that name: that name is already in use
exit code: 125
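The "name is already in use" failure looks like a leftover ceph container from a previous podman-compose run rather than part of the namespace problem. Assuming that old container is no longer needed, something like the following should clear it before re-running compose (podman rm -f and podman-compose down are standard commands):

podman rm -f ceph
podman-compose down
podman-compose up nvmeof --abort-on-container-exit --exit-code-from nvmeof --remove-orphans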
** skipping: ceph-devel
** skipping: ceph-nvmeof_nvmeof-base_1
** skipping: ceph-nvmeof_nvmeof-builder-base_1
** skipping: ceph-nvmeof_nvmeof-builder_1
** skipping: ceph-nvmeof_nvmeof-python-export_1
** skipping: ceph-nvmeof_nvmeof-cli_1
podman volume inspect ceph-nvmeof_ceph-conf || podman volume create ceph-nvmeof_ceph-conf
['podman', 'volume', 'inspect', 'ceph-nvmeof_ceph-conf']
['podman', 'network', 'exists', 'ceph-nvmeof_default']
podman create --name=ceph-nvmeof_nvmeof_1 --requires=ceph --label io.podman.compose.config-hash=b67723df334cc193e5d32c14d7c9efb9b5625df1b9fb8705f72317519ea1cd9e --label io.podman.compose.project=ceph-nvmeof --label io.podman.compose.version=1.0.6 --label [email protected] --label com.docker.compose.project=ceph-nvmeof --label com.docker.compose.project.working_dir=/root/git/ceph-nvmeof --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=nvmeof --cap-add SYS_ADMIN --cap-add CAP_SYS_NICE --cap-add SYS_PTRACE -v /dev/hugepages:/dev/hugepages -v /dev/vfio/vfio:/dev/vfio/vfio -v /root/git/ceph-nvmeof/ceph-nvmeof.conf:/src/ceph-nvmeof.conf -v /tmp/coredump:/tmp/coredump -v /var/log/ceph:/var/log/ceph -v ceph-nvmeof_ceph-conf:/etc/ceph:ro --net ceph-nvmeof_default --network-alias nvmeof -p 4420 -p 5500 -p 8009 -p 4420 -p 5500 -p 10008 --ulimit nofile=20480 --ulimit memlock=-1 --ulimit core=-1:-1 quay.io/ceph/nvmeof:1.0.0
baf0327f0e5999ba54de5740f80145b9887afebebf4fb2b36cbd27215fa42969
exit code: 0
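Since the gateway container only sees the Ceph cluster through the shared ceph-conf volume (mounted read-only at /etc/ceph in the podman create line above), a quick sanity check once it is running is to look at what actually landed in that directory. These are generic podman exec calls against the container name shown in the log, assuming the usual coreutils are present in the image:

podman exec ceph-nvmeof_nvmeof_1 ls -l /etc/ceph
podman exec ceph-nvmeof_nvmeof_1 cat /etc/ceph/ceph.conf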
** skipping: ceph-nvmeof_discovery_1
** skipping: ceph-nvmeof_nvmeof-devel_1
** skipping: ceph-nvmeof_spdk_1
** skipping: ceph-nvmeof_spdk-rpm-export_1
** skipping: ceph-nvmeof_bdevperf_1
** skipping: ceph-nvmeof_ceph-base_1
podman start -a ceph
** skipping: ceph-devel
** skipping: ceph-nvmeof_nvmeof-base_1
** skipping: ceph-nvmeof_nvmeof-builder-base_1
** skipping: ceph-nvmeof_nvmeof-builder_1
** skipping: ceph-nvmeof_nvmeof-python-export_1
** skipping: ceph-nvmeof_nvmeof-cli_1
podman start -a ceph-nvmeof_nvmeof_1
** skipping: ceph-nvmeof_discovery_1
** skipping: ceph-nvmeof_nvmeof-devel_1
INFO:nvmeof:Using configuration file /src/ceph-nvmeof.conf
INFO:nvmeof:====================================== Configuration file content ======================================
INFO:nvmeof:#
INFO:nvmeof:# Copyright (c) 2021 International Business Machines
INFO:nvmeof:# All rights reserved.
INFO:nvmeof:#
INFO:nvmeof:# SPDX-License-Identifier: LGPL-3.0-or-later
INFO:nvmeof:#
INFO:nvmeof:# Authors: [email protected], [email protected]
INFO:nvmeof:#
INFO:nvmeof:
INFO:nvmeof:[gateway]
INFO:nvmeof:name =
INFO:nvmeof:group =
INFO:nvmeof:addr = 0.0.0.0
INFO:nvmeof:port = 5500
INFO:nvmeof:enable_auth = False
INFO:nvmeof:state_update_notify = True
INFO:nvmeof:state_update_interval_sec = 5
INFO:nvmeof:enable_spdk_discovery_controller = False
INFO:nvmeof:#omap_file_lock_duration = 60
INFO:nvmeof:#omap_file_lock_retries = 15
INFO:nvmeof:#omap_file_lock_retry_sleep_interval = 5
INFO:nvmeof:#omap_file_update_reloads = 10
INFO:nvmeof:log_level=debug
INFO:nvmeof:bdevs_per_cluster = 32
INFO:nvmeof:#log_files_enabled = True
INFO:nvmeof:#log_files_rotation_enabled = True
INFO:nvmeof:#verbose_log_messages = True
INFO:nvmeof:#max_log_file_size_in_mb=10
INFO:nvmeof:#max_log_files_count=20
INFO:nvmeof:#
INFO:nvmeof:# Notice that if you change the log directory the log files will only be visible inside the container
INFO:nvmeof:#
INFO:nvmeof:#log_directory = /var/log/ceph/
INFO:nvmeof:#enable_prometheus_exporter = True
INFO:nvmeof:#prometheus_exporter_ssl = True
INFO:nvmeof:#prometheus_port = 10008
INFO:nvmeof:#prometheus_bdev_pools = rbd
INFO:nvmeof:#prometheus_stats_interval = 10
INFO:nvmeof:#verify_nqns = True
INFO:nvmeof:
INFO:nvmeof:[discovery]
INFO:nvmeof:addr = 0.0.0.0
INFO:nvmeof:port = 8009
INFO:nvmeof:
INFO:nvmeof:[ceph]
INFO:nvmeof:pool = rbd
INFO:nvmeof:config_file = /etc/ceph/ceph.conf
INFO:nvmeof:
INFO:nvmeof:[mtls]
INFO:nvmeof:server_key = ./server.key
INFO:nvmeof:client_key = ./client.key
INFO:nvmeof:server_cert = ./server.crt
INFO:nvmeof:client_cert = ./client.crt
INFO:nvmeof:
INFO:nvmeof:[spdk]
INFO:nvmeof:tgt_path = /usr/local/bin/nvmf_tgt
INFO:nvmeof:#rpc_socket_dir = /var/tmp/
INFO:nvmeof:#rpc_socket_name = spdk.sock
INFO:nvmeof:#tgt_cmd_extra_args = --env-context="--no-huge -m1024" --iova-mode=va
INFO:nvmeof:timeout = 60.0
INFO:nvmeof:log_level = WARN
INFO:nvmeof:
INFO:nvmeof:# Example value: -m 0x3 -L all
INFO:nvmeof:# tgt_cmd_extra_args =
INFO:nvmeof:
INFO:nvmeof:# transports = tcp
INFO:nvmeof:
INFO:nvmeof:# Example value: {"max_queue_depth" : 16, "max_io_size" : 4194304, "io_unit_size" : 1048576, "zcopy" : false}
INFO:nvmeof:transport_tcp_options = {"in_capsule_data_size" : 8192, "max_io_qpairs_per_ctrlr" : 7}
INFO:nvmeof:========================================================================================================
INFO:nvmeof:Starting gateway baf0327f0e59
DEBUG:nvmeof:Starting serve
INFO:nvmeof:First gateway: created object nvmeof.state
DEBUG:nvmeof:Configuring server baf0327f0e59
INFO:nvmeof:SPDK Target Path: /usr/local/bin/nvmf_tgt
INFO:nvmeof:SPDK Socket: /var/run/ceph/085ed122-72ef-4f65-acdc-7529d9e5bbe1/spdk.sock
INFO:nvmeof:Starting /usr/local/bin/nvmf_tgt -u -r /var/run/ceph/085ed122-72ef-4f65-acdc-7529d9e5bbe1/spdk.sock
INFO:nvmeof:SPDK process id: 37
INFO:nvmeof:Attempting to initialize SPDK: rpc_socket: /var/run/ceph/085ed122-72ef-4f65-acdc-7529d9e5bbe1/spdk.sock, conn_retries: 300, timeout: 60.0
INFO: Setting log level to WARN
INFO:JSONRPCClient(/var/run/ceph/085ed122-72ef-4f65-acdc-7529d9e5bbe1/spdk.sock):Setting log level to WARN
[2024-03-19 14:50:36.586074] Starting SPDK v23.01.1 / DPDK 22.11.1 initialization...
[2024-03-19 14:50:36.586411] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --no-pci --huge-unlink --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid37 ]
TELEMETRY: No legacy callbacks, legacy socket not created
[2024-03-19 14:50:36.728789] app.c: 712:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-03-19 14:50:36.949256] reactor.c: 926:reactor_run: *NOTICE*: Reactor started on core 0
[2024-03-19 14:50:37.020784] accel_sw.c: 681:sw_accel_module_init: *NOTICE*: Accel framework software module initialized.
DEBUG:nvmeof:create_transport: tcp options: {"in_capsule_data_size" : 8192, "max_io_qpairs_per_ctrlr" : 7}
[2024-03-19 14:50:37.159197] tcp.c: 629:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
INFO:nvmeof:Discovery service process id: 40
INFO:nvmeof:Starting ceph nvmeof discovery service
INFO:nvmeof:Using NVMeoF gateway version 1.0.0
INFO:nvmeof:Using SPDK version 23.01.1
INFO:nvmeof:Using vstart cluster version based on 18.2.1
INFO:nvmeof:NVMeoF gateway built on: 2024-01-31 13:36:14 UTC
INFO:nvmeof:NVMeoF gateway Git repository: https://github.com/ceph/ceph-nvmeof
INFO:nvmeof:NVMeoF gateway Git branch: tags/1.0.0
INFO:nvmeof:NVMeoF gateway Git commit: d08860d3a1db890b2c3ec9c8da631f1ded3b61b6
INFO:nvmeof:SPDK Git repository: https://github.com/ceph/spdk.git
INFO:nvmeof:SPDK Git branch: undefined
INFO:nvmeof:SPDK Git commit: 668268f74ea147f3343b9f8136df3e6fcc61f4cf
INFO:nvmeof:Connected to Ceph with version "18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)"
INFO:nvmeof:Requested huge pages count is 2048
INFO:nvmeof:Actual huge pages count is 2048
INFO:nvmeof:nvmeof.state omap object already exists.
INFO:nvmeof:log pages info from omap: nvmeof.state
INFO:nvmeof:discovery addr: 0.0.0.0 port: 8009
DEBUG:nvmeof:waiting for connection...
INFO:nvmeof:Prometheus endpoint is enabled
ERROR:control.prometheus:Unable to start prometheus exporter - missing cert/key file(s)
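The prometheus error is a side issue: TLS appears to be enabled by default for the exporter and no cert/key pair is mounted, so it refuses to start. If the exporter is wanted without TLS, a minimal ceph-nvmeof.conf tweak (reusing the keys already shown commented out in the config dump above, and assuming the exporter honors the SSL toggle) would be:

[gateway]
enable_prometheus_exporter = True
prometheus_exporter_ssl = False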
INFO:nvmeof:Received request to list the subsystem, context: <grpc._server._Context object at 0x7f6d2a9f95b0>
INFO:nvmeof:list_subsystems: []
INFO:nvmeof:Received request to create subsystem nqn.2016-06.io.spdk:nor1devlcph01, enable_ha: False, ana reporting: False, context: <grpc._server._Context object at 0x7f6d2aa01880>
INFO:nvmeof:No serial number specified for nqn.2016-06.io.spdk:nor1devlcph01, will use SPDK56572980226401
INFO:nvmeof:create_subsystem nqn.2016-06.io.spdk:nor1devlcph01: True
DEBUG:nvmeof:omap_key generated: subsystem_nqn.2016-06.io.spdk:nor1devlcph01
DEBUG:nvmeof:Update complete.
INFO:nvmeof:Received request to list the subsystem, context: <grpc._server._Context object at 0x7f6d2a9c8df0>
INFO:nvmeof:list_subsystems: [{'nqn': 'nqn.2016-06.io.spdk:nor1devlcph01', 'subtype': 'NVMe', 'listen_addresses': [], 'allow_any_host': False, 'hosts': [], 'serial_number': 'SPDK56572980226401', 'model_number': 'SPDK bdev Controller', 'max_namespaces': 256, 'min_cntlid': 1, 'max_cntlid': 65519, 'namespaces': []}]
DEBUG:nvmeof:value of sub-system: {
"subsystem_nqn": "nqn.2016-06.io.spdk:nor1devlcph01",
"serial_number": "SPDK56572980226401",
"max_namespaces": 256,
"ana_reporting": false,
"enable_ha": false
}
INFO:nvmeof:Subsystem nqn.2016-06.io.spdk:nor1devlcph01 enable_ha: False
INFO:nvmeof:Received request to add a namespace to nqn.2016-06.io.spdk:nor1devlcph01, context: <grpc._server._Context object at 0x7f6d2a9dea60>
INFO:nvmeof:Received request to create bdev bdev_d62817d2-a86e-4866-9b21-e90c5f6db0c7 from nvmeoftest/disk01 (size 0 MiB) with block size 512, will not create image if doesn't exist
INFO:nvmeof:Allocating cluster name='cluster_context_0'
[2024-03-19 14:57:56.144147] bdev_rbd.c: 293:bdev_rbd_init_context: *ERROR*: Failed to create ioctx on rbd=0x23bce00
[2024-03-19 14:57:56.144326] bdev_rbd.c: 335:bdev_rbd_init: *ERROR*: Cannot init rbd context for rbd=0x23bce00
[2024-03-19 14:57:56.144353] bdev_rbd.c:1170:bdev_rbd_create: *ERROR*: Failed to init rbd device
ERROR:nvmeof:bdev_rbd_create bdev_d62817d2-a86e-4866-9b21-e90c5f6db0c7 failed with:
request:
{
"pool_name": "nvmeoftest",
"rbd_name": "disk01",
"block_size": 512,
"name": "bdev_d62817d2-a86e-4866-9b21-e90c5f6db0c7",
"cluster_name": "cluster_context_0",
"uuid": "d62817d2-a86e-4866-9b21-e90c5f6db0c7",
"method": "bdev_rbd_create",
"req_id": 6
}
Got JSON-RPC error response
response:
{
"code": -1,
"message": "Operation not permitted"
}
ERROR:nvmeof:Failure adding namespace to nqn.2016-06.io.spdk:nor1devlcph01: Failure creating bdev bdev_d62817d2-a86e-4866-9b21-e90c5f6db0c7: Operation not permitted
[2024-03-19 14:57:56.151466] bdev.c:7158:spdk_bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: bdev_d62817d2-a86e-4866-9b21-e90c5f6db0c7
[2024-03-19 14:57:56.151502] bdev_rpc.c: 866:rpc_bdev_get_bdevs: *ERROR*: bdev 'bdev_d62817d2-a86e-4866-9b21-e90c5f6db0c7' does not exist
ERROR:nvmeof:Got exception while getting bdev bdev_d62817d2-a86e-4866-9b21-e90c5f6db0c7 info
Traceback (most recent call last):
File "/src/control/grpc.py", line 892, in get_bdev_info
bdevs = rpc_bdev.bdev_get_bdevs(self.spdk_rpc_client, name=bdev_name)
File "/usr/lib/python3.9/site-packages/spdk/rpc/bdev.py", line 1553, in bdev_get_bdevs
return client.call('bdev_get_bdevs', params)
File "/usr/lib/python3.9/site-packages/spdk/rpc/client.py", line 203, in call
raise JSONRPCException(msg)
spdk.rpc.client.JSONRPCException: request:
{
"name": "bdev_d62817d2-a86e-4866-9b21-e90c5f6db0c7",
"method": "bdev_get_bdevs",
"req_id": 7
}
Got JSON-RPC error response
response:
{
"code": -19,
"message": "No such device"
}
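Note that this first attempt points the bdev at pool nvmeoftest, while the compose ceph container only created the rbd pool (the "pool 'rbd' created" line earlier), so the ioctx failure here is not surprising. One way to confirm which pools and images the cluster the gateway talks to actually contains is to run the checks inside the compose ceph container, assuming the ceph/rbd CLIs and an admin keyring are available there:

podman exec ceph ceph osd pool ls
podman exec ceph rbd ls --pool rbd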
INFO:nvmeof:Received request to add a namespace to nqn.2016-06.io.spdk:nor1devlcph01, context: <grpc._server._Context object at 0x7f6d2a9d02e0>
INFO:nvmeof:Received request to create bdev bdev_94032bc0-9432-4b1c-b55d-273aa0cac42e from rbd/disk01 (size 0 MiB) with block size 512, will not create image if doesn't exist
[2024-03-19 14:58:51.924801] bdev_rbd.c: 299:bdev_rbd_init_context: *ERROR*: Failed to open specified rbd device
[2024-03-19 14:58:51.925254] bdev_rbd.c: 335:bdev_rbd_init: *ERROR*: Cannot init rbd context for rbd=0x23bd130
[2024-03-19 14:58:51.925292] bdev_rbd.c:1170:bdev_rbd_create: *ERROR*: Failed to init rbd device
ERROR:nvmeof:bdev_rbd_create bdev_94032bc0-9432-4b1c-b55d-273aa0cac42e failed with:
request:
{
"pool_name": "rbd",
"rbd_name": "disk01",
"block_size": 512,
"name": "bdev_94032bc0-9432-4b1c-b55d-273aa0cac42e",
"cluster_name": "cluster_context_0",
"uuid": "94032bc0-9432-4b1c-b55d-273aa0cac42e",
"method": "bdev_rbd_create",
"req_id": 8
}
Got JSON-RPC error response
response:
{
"code": -1,
"message": "Operation not permitted"
}
ERROR:nvmeof:Failure adding namespace to nqn.2016-06.io.spdk:nor1devlcph01: Failure creating bdev bdev_94032bc0-9432-4b1c-b55d-273aa0cac42e: Operation not permitted
[2024-03-19 14:58:51.929381] bdev.c:7158:spdk_bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: bdev_94032bc0-9432-4b1c-b55d-273aa0cac42e
[2024-03-19 14:58:51.929423] bdev_rpc.c: 866:rpc_bdev_get_bdevs: *ERROR*: bdev 'bdev_94032bc0-9432-4b1c-b55d-273aa0cac42e' does not exist
ERROR:nvmeof:Got exception while getting bdev bdev_94032bc0-9432-4b1c-b55d-273aa0cac42e info
Traceback (most recent call last):
File "/src/control/grpc.py", line 892, in get_bdev_info
bdevs = rpc_bdev.bdev_get_bdevs(self.spdk_rpc_client, name=bdev_name)
File "/usr/lib/python3.9/site-packages/spdk/rpc/bdev.py", line 1553, in bdev_get_bdevs
return client.call('bdev_get_bdevs', params)
File "/usr/lib/python3.9/site-packages/spdk/rpc/client.py", line 203, in call
raise JSONRPCException(msg)
spdk.rpc.client.JSONRPCException: request:
{
"name": "bdev_94032bc0-9432-4b1c-b55d-273aa0cac42e",
"method": "bdev_get_bdevs",
"req_id": 9
}
Got JSON-RPC error response
response:
{
"code": -19,
"message": "No such device"
}
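For the rbd/disk01 attempt itself, "Operation not permitted" from bdev_rbd_create generally means the client identity the gateway authenticates with either lacks caps on that pool or is connected to a different cluster than the one where the image was created (the gateway above reports a vstart 18.2.1 cluster). A hedged way to compare the two sides, assuming grep and the ceph CLI are present in the respective containers:

podman exec ceph ceph fsid
podman exec ceph-nvmeof_nvmeof_1 grep fsid /etc/ceph/ceph.conf
podman exec ceph ceph auth get client.admin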