Linstor primary storages are not created for disabled hosts
ISSUE TYPE
- Bug Report
COMPONENT NAME
primary storage
Linstor
CLOUDSTACK VERSION
4.18.1
CONFIGURATION
Linstor
OS / ENVIRONMENT
SUMMARY
When adding a Linstor primary storage, it is not added for disabled hosts, i.e. the entries in the table 'storage_pool_host_ref' are missing. Once such a host is enabled later, it can't be used to deploy to this primary storage. The issue likewise appears when a host is added while a Linstor primary storage is disabled.
STEPS TO REPRODUCE
1. Disable a host in a cluster
2. Add a linstor primary storage
3. Check the table storage_pool_host_ref, or enable the host and try a new instance deployment (check the logs to confirm), or use tags to force it.
EXPECTED RESULTS
Disabled hosts/primary storages need to be considered while adding a primary storage/host.
@rp- are you aware? can you triage this?
From what I tested this isn't restricted to Linstor, this affects all primary storage drivers (or at least, CloudStackPrimary.., Linstor, StorPool, ...)
And I think it is caused by filtering for enabled hosts while attaching the storage pool:
List<HostVO> hosts = _resourceMgr.listAllUpAndEnabledHostsInOneZoneByHypervisor(hypervisorType, scope.getScopeId());
I also noticed that restarting the management server will add the missing storage_pool_host_ref entries (as long as the hosts are enabled by then).
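The filtering difference described above can be sketched as follows. This is a minimal, self-contained illustration, not the actual CloudStack code: `Host`, `Status` and `ResourceState` are simplified stand-ins for the real types, and the "suggested" filter is only one possible relaxation of `listAllUpAndEnabledHostsInOneZoneByHypervisor`.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the filtering discussed above; the types are hypothetical
// stand-ins for the CloudStack Host/Status/ResourceState classes.
public class HostFilterSketch {
    enum Status { Up, Disconnected }
    enum ResourceState { Enabled, Disabled, Maintenance }

    record Host(String name, Status status, ResourceState resourceState) {}

    // Current behaviour: only Up AND Enabled hosts get storage_pool_host_ref rows,
    // so a Disabled host is skipped when the pool is attached.
    static List<Host> upAndEnabled(List<Host> hosts) {
        return hosts.stream()
                .filter(h -> h.status() == Status.Up
                        && h.resourceState() == ResourceState.Enabled)
                .collect(Collectors.toList());
    }

    // One possible relaxation: include Disabled hosts (still managed and
    // connected), but skip hosts in Maintenance or not Up.
    static List<Host> upIncludingDisabled(List<Host> hosts) {
        return hosts.stream()
                .filter(h -> h.status() == Status.Up
                        && h.resourceState() != ResourceState.Maintenance)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Host> hosts = List.of(
                new Host("h1", Status.Up, ResourceState.Enabled),
                new Host("h2", Status.Up, ResourceState.Disabled),
                new Host("h3", Status.Disconnected, ResourceState.Enabled));
        System.out.println(upAndEnabled(hosts).size());        // prints 1: h2 is skipped today
        System.out.println(upIncludingDisabled(hosts).size()); // prints 2: h2 would get the pool
    }
}
```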
@rp- @rajujith is there anything to do on this, or do we live with the workaround?
From what I saw, this would need to be fixed for all primary storages. But I'm not sure if we would want to add primary storages while they are disabled, or if there should be some functionality/hook that adds the missing storages when the host is enabled again...
From my point of view, the workaround is probably good enough for now ;)
@rajujith ?
@DaanHoogland It's good to have a workaround, but it breaks consistency if not fixed: for non-managed primary storages, the pool is added for disabled hosts as well.
cc @rp-
Hi all, this also affects NFS and StorPool as primary storage (probably all storage plug-ins). The fix should be general rather than per storage plugin, and when a host is enabled, those pools should be added. @rajujith, why should primary storage have to be created on a disabled host?
Another workaround is also not restarting the management service, but the agent service (or force reconnect via the UI)
Are you suggesting an implementation, @slavkap ?
@slavkap once a host is disabled in CloudStack it only means that new resources will not be allocated on that host; the host is still managed and connected to the management server. There is no reason not to add a new primary storage. We can avoid adding a primary storage if the host is in Maintenance or Disconnected state. If we decide not to add a new primary storage while the host is disabled, it should be added when the host is enabled later. This can be fixed in general; I happened to notice it with Linstor.
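The "add the pool when the host is enabled later" idea could be sketched as a reconciliation step on the enable transition. This is a hypothetical illustration, not CloudStack code: `poolHostRef` stands in for the storage_pool_host_ref table and `reconcile` for a hook that the real fix would wire into the host state machine.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of reconciling missing pool/host associations when a
// host transitions Disabled -> Enabled. Names and structures are stand-ins.
public class EnableHostHook {
    record StoragePool(long id) {}

    // Stand-in for the storage_pool_host_ref table: hostId -> attached pool ids.
    static final Map<Long, Set<Long>> poolHostRef = new HashMap<>();

    // Called on enable: attach every pool in the host's scope that has no
    // storage_pool_host_ref entry yet; returns the ids that were added.
    static List<Long> reconcile(long hostId, List<StoragePool> poolsInScope) {
        Set<Long> attached = poolHostRef.computeIfAbsent(hostId, k -> new HashSet<>());
        List<Long> added = new ArrayList<>();
        for (StoragePool pool : poolsInScope) {
            if (attached.add(pool.id())) {
                added.add(pool.id()); // the real fix would call the driver's attach here
            }
        }
        return added;
    }

    public static void main(String[] args) {
        // Host 1 already has pool 10; pool 20 was created while it was disabled.
        poolHostRef.put(1L, new HashSet<>(Set.of(10L)));
        List<Long> added = reconcile(1L, List.of(new StoragePool(10), new StoragePool(20)));
        System.out.println(added); // prints [20]: only the missing pool is attached
    }
}
```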
@rajujith, thank you for the explanation! From what I saw, this issue covers all of the storage plug-ins, and I don't know how each would like to handle it. Probably a general solution that adds the primary storage to the host when the host is enabled would be better.
@DaanHoogland, I can work on a general fix (mentioned by @rajujith) if we decide it will be the best option.
@slavkap , don’t feel pushed into something. I was really wondering if you meant your remark as a proposal.