Moritz Wanzenböck
Please open an issue over at https://github.com/linbit/linstor-server. Does the issue happen right at startup? If not, have you tried restarting the Pod?
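If you want to try a restart first, deleting the Pod is usually enough, since its controller recreates it. A minimal sketch, assuming the default `piraeus-datastore` namespace and a placeholder Pod name:

```bash
# Restart by deletion; the Deployment/DaemonSet recreates the Pod.
# Pod name and namespace are placeholders; adjust for your cluster.
kubectl delete pod linstor-controller-xxxxx -n piraeus-datastore
```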
You can edit the `piraeus-operator-image-config` ConfigMap, which holds the image information. You need to change the `linstor-satellite` and `linstor-controller` tags.
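A sketch of how that edit might look; the namespace and the exact key layout inside the ConfigMap are assumptions and can differ between operator versions, so check the data in your cluster first:

```bash
# Open the ConfigMap for editing (namespace assumed to be piraeus-datastore):
kubectl edit configmap piraeus-operator-image-config -n piraeus-datastore

# Inside, look for entries along these lines and bump the tags
# (key names are illustrative; the exact layout may differ by version):
#   linstor-satellite:
#     tag: v1.24.2
#   linstor-controller:
#     tag: v1.24.2
```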
By the way, the original issue was only with the livenessProbe for the SpaceTracking service; you could go back to 1.23.0 and patch the deployment to remove the livenessProbe. Something...
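A minimal sketch of such a patch; the Deployment name, namespace, and container index are assumptions, so adjust them to match your cluster:

```bash
# Remove the livenessProbe from the (assumed) controller Deployment
# using a JSON patch. /containers/0/ assumes the probe sits on the
# first container; verify with `kubectl get deployment ... -o yaml`.
kubectl patch deployment linstor-controller -n piraeus-datastore \
  --type=json \
  -p '[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'
```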
Deleting the finalizer will remove them from the operator's memory. You would then need to manually run `linstor node lost <node>` if the node still exists in LINSTOR to get...
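A sketch of the two steps, assuming the stuck resource is a `LinstorSatellite` named after the node (the resource kind and names here are assumptions):

```bash
# Clear the finalizers so the operator forgets the resource:
kubectl patch linstorsatellite <node-name> --type=json \
  -p '[{"op": "remove", "path": "/metadata/finalizers"}]'

# Then, if the node is still registered in LINSTOR, remove it forcefully:
linstor node lost <node-name>
```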
It will appear again, as long as:

* The node exists in Kubernetes (`kubectl get nodes`)
* The node is *not* excluded by the [`spec.nodeSelector`](https://github.com/piraeusdatastore/piraeus-operator/blob/v2/docs/reference/linstorcluster.md#specnodeselector) on the `LinstorCluster` resource.
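For reference, a sketch of a `LinstorCluster` that limits satellites via `spec.nodeSelector`; the label key/value is an example, and only nodes carrying that label would get a satellite:

```yaml
apiVersion: piraeus.io/v1
kind: LinstorCluster
metadata:
  name: linstorcluster
spec:
  nodeSelector:
    # Example label; nodes without it are excluded from the cluster.
    node-role.kubernetes.io/worker: ""
```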
What does the `LinstorCluster` resource look like? Does it have a `spec.nodeSelector` set? And can you show the labels on one of the affected nodes?
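To gather that information, something like the following should work (the singleton resource name `linstorcluster` is an assumption; list the resources first if unsure):

```bash
# Dump the LinstorCluster spec, including any nodeSelector:
kubectl get linstorcluster linstorcluster -o yaml

# Show the labels on an affected node:
kubectl get node <affected-node> --show-labels
```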
> If it was supporting nodeAffinity, you'd be able to exclude..

Please open a feature request; it should not be too hard to implement :)

> I did not have the...
Yes, it's currently a (bad) reimplementation of the Kubernetes scheduler. The reason is that we need to support every node having a slightly different Pod spec for the satellite. That is...
Try running `linstor node lost ...`; that should cause LINSTOR to forcefully remove the node from all DRBD configurations.
Thank you for the report. We are aware of the potential impact of a compromised piraeus-operator-controller-manager deployment. However, we see little room to change that: the Piraeus Operator manages all...