Add tech preview of local storage in cinder
- Create volume group on third supplied block device
- Check out cinder dm-clone driver
- Add compute nodes to storage group
- Deploy cinder lvm backend and iscsi containers
- Add and enable local backend using dm-clone driver
- Add volume type for local backend (see the sketch after this list)
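The steps above roughly map to the following commands; a minimal sketch assuming the third block device shows up as /dev/sdd, a volume group named cinder-local, and a backend name of local (all three are placeholders, the real values come from the testbed configuration):

```sh
# Create the volume group for the local backend on the third block device
# (/dev/sdd and the volume group name are assumptions).
vgcreate cinder-local /dev/sdd

# After the cinder containers are deployed, register a volume type that is
# tied to the local backend via its volume_backend_name (assumed to be "local").
openstack volume type create --property volume_backend_name=local local
```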
This still needs to be tested
@berendt For now I have used the third supplied block device as local storage. From my understanding it is only used for ceph when explicitly requested. Should I add additional storage for the dm-clone driver in the testbed Terraform, or should I make its deployment optional and only deploy when the third device is not used for ceph? :thinking:
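If we go the optional route, one possible shape as a shell guard, only as a sketch (the device path and the emptiness check are assumptions; the real decision should probably come from the testbed configuration rather than from probing the device):

```sh
# Only create the local volume group if the third device is not already
# claimed (e.g. by ceph). lsblk reports an empty FSTYPE for unused devices.
device=/dev/sdd   # assumed path of the third supplied block device
if [ -z "$(lsblk -ndo FSTYPE "$device")" ]; then
    vgcreate cinder-local "$device"
else
    echo "skipping local storage: $device is already in use" >&2
fi
```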
@janhorstmann Please fix the Ansible Lint issues.
Test was okay. The issue with the third block device still exists.
As discussed, the third block device will be used for volume type local
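For reference, consuming that backend is then an ordinary volume create against the type; a usage sketch (volume name and size are arbitrary):

```sh
# Create a volume on the local (dm-clone) backend via its volume type.
openstack volume create --type local --size 10 local-test-volume
```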
On my last testbed deployment the volume group for the local storage was not created. The playbook itself worked fine when run manually, but the volume group was not created automatically during deployment. I will set this back to draft until I have time to investigate.
I made a habit out of running the ceph and infrastructure deployments in parallel since they were independent of each other. In this case it led to a race between environments/custom/playbook-wipe-partitions.yml and environments/custom/playbook-cinder-driver-dm-clone.yml.
I have moved the latter into scripts/deploy/300-openstack-services.sh now.
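Rough sketch of the resulting order in scripts/deploy/300-openstack-services.sh, assuming the custom playbook can be invoked through osism apply under its playbook name (the exact invocation and surrounding steps are placeholders):

```sh
# Run the dm-clone preparation only after the (previously parallel) ceph and
# infrastructure phases have finished, so it can no longer race with
# environments/custom/playbook-wipe-partitions.yml.
osism apply cinder-driver-dm-clone   # assumed invocation of the custom playbook
osism apply cinder
```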