Support for other operators using the NodeMaintenance resource.
I was wondering if there was any good reason other operators couldn't respond to the node maintenance CRD and make decisions about what to do with resources tied to the node (like local storage).
The real-life example I was considering would require adding an optional field to the NodeMaintenance CRD that holds the estimated length of the maintenance.
Flow:
- A storage operator has replicated data on the nodes and wants to put a node in maintenance.
- User puts a node into Maintenance by creating a NodeMaintenance object.
- In the NodeMaintenance object, the user estimates how long the node is going to be in maintenance.
- The storage on that node goes down. The storage operator must choose whether to recreate the replicated data elsewhere (to maintain the number of replicas) or to wait for the node to come back up.
- If the estimated maintenance time > (the estimated time to recreate the replicated data + SOME_OFFSET), recreate the data elsewhere (see the Go sketch at the end of this comment).
- Else, wait for the node to come back up.
- If the node does not come back up within that time limit, recreate the data elsewhere.
- Can we include features in the CRD for consumption by other controllers?
- Can we deploy the CRD developed here without its controller? (to avoid a dependency)
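To make the decision rule concrete, here is a minimal Go sketch of the wait-or-rebuild choice described in the flow above; the function names, the offset argument (standing in for SOME_OFFSET), and the rebuild callback are illustrative assumptions, not part of any existing operator API.

```go
// Illustrative decision helpers for a storage operator reacting to a
// NodeMaintenance object; nothing here is an existing API.
package storage

import "time"

// shouldRebuildElsewhere returns true when the node is expected to be away
// long enough that rebuilding the replicas on another node is the better choice.
func shouldRebuildElsewhere(estimatedMaintenance, rebuildEstimate, offset time.Duration) bool {
	return estimatedMaintenance > rebuildEstimate+offset
}

// waitOrRebuild waits up to the estimated maintenance duration for the node
// to come back; if it does not, the data is recreated from other replicas.
func waitOrRebuild(estimatedMaintenance time.Duration, nodeReady <-chan struct{}, rebuild func()) {
	select {
	case <-nodeReady:
		// The node (and its local disks) returned within the estimate.
	case <-time.After(estimatedMaintenance):
		// Estimate exceeded: recreate the replicated data elsewhere.
		rebuild()
	}
}
```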
> Can we include features in the CRD for consumption by other controllers?
Adding a timeout buffer to each node maintenance invocation, e.g. creating a CR to initiate maintenance with an optional timeout field, might be a reasonable thing to do, not only for storage nodes but for general use. @aglitke @rmohr @MarSik - any thoughts here?
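As a rough illustration of what such a field could look like on the spec: the example below assumes the spec currently carries the node name and a reason, and the EstimatedDuration field (its name and type included) is a proposal, not the existing API.

```go
// Hypothetical shape of the NodeMaintenance spec with the proposed duration
// field; fields other than EstimatedDuration are assumed from the existing CRD.
package v1beta1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

type NodeMaintenanceSpec struct {
	// NodeName is the node to put into maintenance (assumed existing field).
	NodeName string `json:"nodeName"`
	// Reason is a free-form description of why the node is in maintenance
	// (assumed existing field).
	Reason string `json:"reason,omitempty"`
	// EstimatedDuration is the user's estimate of how long the maintenance
	// will last. Optional; other controllers (e.g. storage operators) may use
	// it to decide whether to wait or to rebuild data elsewhere. (Proposed field.)
	EstimatedDuration *metav1.Duration `json:"estimatedDuration,omitempty"`
}
```

Keeping the field optional and typed as a duration would let consumers treat it either as a hard timeout or as a soft estimate.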
> Can we deploy the CRD developed here without its controller? (to avoid dependency)
Not sure I completely understand the question here. If the intention is to create the CRD without deploying the operator itself (i.e. the controller), then yes, it is possible. What dependency are you trying to avoid?
Rather than a timeout, I was thinking of a user estimate of how long the maintenance is expected to last, so that we can decide whether to wait for the node (with its disks) to come back up or recreate the data elsewhere from the other replicas.
I'm thinking of consuming the CRD as a general way to signal NodeMaintenance for our storage operator, with or without the operator. Preferably, the node maintenance operator would also be deployed alongside it, but I'm imagining our consumption of it wouldn't be affected by its absence.
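If it helps, here is a minimal sketch of how a storage operator might probe for the NodeMaintenance API at startup so its absence is handled gracefully; the group/version string is an assumption and should match whatever the deployed CRD actually registers.

```go
// Minimal sketch: detect whether the NodeMaintenance API is served before
// relying on it. The group/version below is an assumption.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

func nodeMaintenanceAvailable(cfg *rest.Config) (bool, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return false, err
	}
	// Assumed group/version for the NodeMaintenance CRD.
	resources, err := dc.ServerResourcesForGroupVersion("nodemaintenance.kubevirt.io/v1beta1")
	if errors.IsNotFound(err) {
		return false, nil // CRD not installed; fall back to other signals.
	}
	if err != nil {
		return false, err
	}
	for _, r := range resources.APIResources {
		if r.Kind == "NodeMaintenance" {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	ok, err := nodeMaintenanceAvailable(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("NodeMaintenance API available:", ok)
}
```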
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
I understand this is still an interesting feature.
/remove-lifecycle stale
/lifecycle frozen