ceph-ansible
ceph_osd: Stop using tmpfs for /var/lib/ceph/osd/ceph-*
This is needed to avoid an error when running the systemd-run script.
Could you sign off your commit? BTW, what kind of error did you encounter while not using --no-tmpfs?
The run script first runs ceph-volume lvm activate with a tmpfs (keyring, block link, etc.); that container is then removed, and after that a different container tries to start the OSD without the keyring, block link, etc.
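To make the difference concrete, here is a minimal sketch of the two activate modes (the OSD id and uuid are taken from the listing further down; flags on a real deployment may differ):

# Default: activate mounts a tmpfs on /var/lib/ceph/osd/ceph-0 and writes
# the keyring, block symlink, etc. onto it. That mount only lives as long
# as the activate container, so the next container sees an empty directory.
ceph-volume lvm activate --no-systemd 0 be00c836-4221-43b1-b5cb-c90bf2113cb0

# With --no-tmpfs the files land in the real (bind-mounted) directory and
# survive the activate container's exit.
ceph-volume lvm activate --no-systemd --no-tmpfs 0 be00c836-4221-43b1-b5cb-c90bf2113cb0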
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in two weeks if no further activity occurs. Thank you for your contributions.
@jnemeiksis it is not clear to me why you need this change; could you elaborate a bit more, please (with concrete examples)?
- In this step one container creates files on tmpfs and then shuts down, taking the created files with it:
[root@sandbox-osd1 ~]# ls -la /var/lib/ceph/osd/ceph-0/
total 44
drwxr-xr-x 2 167 167 4096 Apr 12 11:34 .
drwxr-xr-x 7 167 167 4096 Apr 15 09:58 ..
lrwxrwxrwx 1 167 167 93 Apr 12 11:13 block -> /dev/ceph-bf5823d3-2253-4eb3-b082-3b8ab2bafb32/osd-block-be00c836-4221-43b1-b5cb-c90bf2113cb0
-rw------- 1 167 167 37 Apr 12 11:13 ceph_fsid
-rw------- 1 167 167 37 Apr 12 11:13 fsid
-rw------- 1 167 167 55 Apr 12 11:13 keyring
-rw------- 1 167 167 6 Apr 12 11:13 ready
-rw------- 1 167 167 3 Apr 11 13:33 require_osd_release
-rwx------ 1 167 167 1558 Apr 12 11:34 run
-rw------- 1 167 167 10 Apr 12 11:13 type
-rw------- 1 167 167 2 Apr 12 11:13 whoami
- In this step the ceph-osd-0 container tries to start but doesn't find the required files (block link, keyring, etc.); a quick way to confirm this is sketched below.
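One way to verify that the first step is the culprit (a hedged sketch; the container name is illustrative, the path follows the example above) is to check whether the data dir is tmpfs-backed while the activate container is still up:

# Run inside the activate container (container name is illustrative):
docker exec ceph-osd-0-activate findmnt -T /var/lib/ceph/osd/ceph-0
# Expect FSTYPE=tmpfs when activate ran without --no-tmpfs; that mount
# (and the keyring/block link on it) disappears with the container.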
BTW @guits, you mentioned in the Slack chat that cephadm also uses --no-tmpfs.
I've tested on cephadm, and this is what I mean: ceph-volume lvm activate is run with the --no-tmpfs flag:
[root@jnm-test ~]# cat /var/lib/ceph/5ebf2f38-11e1-11ef-97c9-fa163e8b0e19/osd.0/unit.run
set -e
/bin/install -d -m0770 -o 167 -g 167 /var/run/ceph/5ebf2f38-11e1-11ef-97c9-fa163e8b0e19
# LVM OSDs use ceph-volume lvm activate
! /bin/docker rm -f ceph-5ebf2f38-11e1-11ef-97c9-fa163e8b0e19-osd.0-activate 2> /dev/null
! /bin/docker rm -f ceph-5ebf2f38-11e1-11ef-97c9-fa163e8b0e19-osd-0-activate 2> /dev/null
/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init --name ceph-5ebf2f38-11e1-11ef-97c9-fa163e8b0e19-osd-0-activate -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:b029184acaa7acd85c8dac6d453468c3fa30c2bd17c554beaf19b6e8e3bf309f -e NODE_NAME=jnm-test -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/5ebf2f38-11e1-11ef-97c9-fa163e8b0e19:/var/run/ceph:z -v /var/log/ceph/5ebf2f38-11e1-11ef-97c9-fa163e8b0e19:/var/log/ceph:z -v /var/lib/ceph/5ebf2f38-11e1-11ef-97c9-fa163e8b0e19/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/5ebf2f38-11e1-11ef-97c9-fa163e8b0e19/osd.0:/var/lib/ceph/osd/ceph-0:z -v /var/lib/ceph/5ebf2f38-11e1-11ef-97c9-fa163e8b0e19/osd.0/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/5ebf2f38-11e1-11ef-97c9-fa163e8b0e19/selinux:/sys/fs/selinux:ro -v /:/rootfs quay.io/ceph/ceph@sha256:b029184acaa7acd85c8dac6d453468c3fa30c2bd17c554beaf19b6e8e3bf309f activate --osd-id 0 --osd-uuid 78ba3d98-bf95-4222-a490-4afe5775a7f2 --no-systemd --no-tmpfs
# osd.0
! /bin/docker rm -f ceph-5ebf2f38-11e1-11ef-97c9-fa163e8b0e19-osd.0 2> /dev/null
! /bin/docker rm -f ceph-5ebf2f38-11e1-11ef-97c9-fa163e8b0e19-osd-0 2> /dev/null
/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-5ebf2f38-11e1-11ef-97c9-fa163e8b0e19-osd-0 --pids-limit=0 -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:b029184acaa7acd85c8dac6d453468c3fa30c2bd17c554beaf19b6e8e3bf309f -e NODE_NAME=jnm-test -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/5ebf2f38-11e1-11ef-97c9-fa163e8b0e19:/var/run/ceph:z -v /var/log/ceph/5ebf2f38-11e1-11ef-97c9-fa163e8b0e19:/var/log/ceph:z -v /var/lib/ceph/5ebf2f38-11e1-11ef-97c9-fa163e8b0e19/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/5ebf2f38-11e1-11ef-97c9-fa163e8b0e19/osd.0:/var/lib/ceph/osd/ceph-0:z -v /var/lib/ceph/5ebf2f38-11e1-11ef-97c9-fa163e8b0e19/osd.0/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/5ebf2f38-11e1-11ef-97c9-fa163e8b0e19/selinux:/sys/fs/selinux:ro -v /:/rootfs quay.io/ceph/ceph@sha256:b029184acaa7acd85c8dac6d453468c3fa30c2bd17c554beaf19b6e8e3bf309f -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true '--default-log-stderr-prefix=debug '
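On the ceph-ansible side, the fix amounts to passing the same flag to the activate call in the containerized OSD run script. A rough sketch (this line is illustrative, not the exact ceph-ansible template):

# before: activate populates a tmpfs that dies with the activate container
ceph-volume lvm activate --no-systemd "${OSD_ID}" "${OSD_FSID}"
# after: keep keyring, block link, etc. on the bind-mounted host directory
ceph-volume lvm activate --no-systemd --no-tmpfs "${OSD_ID}" "${OSD_FSID}"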
No need for this in this version. Closing the PR.