node-maintenance-operator

Support for other operators using the NodeMaintenance resource.

rohantmp opened this issue 6 years ago • 13 comments

I was wondering if there was any good reason other operators couldn't respond to the node maintenance CRD and make decisions about what to do with resources tied to the node (like local storage).

The real-life example I was considering would require us to add an optional field to the NodeMaintenance CRD that estimates how long the maintenance will last.

Flow:

  • A storage operator has replicated data on the nodes and wants to put a node in maintenance.
  • User puts a node into Maintenance by creating a NodeMaintenance object.
  • In the NM object, the user estimates how many minutes the Node is going to be in maintenance for.
  • The storage on that node goes down. The storage operator must choose whether to recreate the replicated data elsewhere (to maintain the number of replicas) or to wait for the node to come back up (see the sketch after this list).
    • If the estimated maintenance time > (the estimated time to recreate the replicated data + SOME_OFFSET), then recreate the data elsewhere
    • else, wait for the node to come up.
      • if the node does not come up within that time limit, recreate the data elsewhere.
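
A minimal sketch of that wait-or-rebuild decision in Go, assuming a hypothetical estimated maintenance duration read from the NodeMaintenance object and a rebuild-time estimate supplied by the storage operator (neither exists in the current API):

```go
// Hypothetical decision helper for a storage operator. The inputs
// (estimated maintenance duration, rebuild estimate, safety offset)
// are illustrative and not part of the current NodeMaintenance API.
package replication

import "time"

// safetyOffset plays the role of SOME_OFFSET from the flow above.
const safetyOffset = 10 * time.Minute

// shouldRebuildNow returns true when the announced maintenance window is
// longer than rebuilding the replicas elsewhere would take.
func shouldRebuildNow(estimatedMaintenance, rebuildEstimate time.Duration) bool {
	return estimatedMaintenance > rebuildEstimate+safetyOffset
}

// waitOrRebuild either rebuilds immediately or waits for the node until
// the estimated maintenance window expires, then rebuilds as a fallback.
func waitOrRebuild(estimatedMaintenance, rebuildEstimate time.Duration, nodeIsUp func() bool, rebuild func()) {
	if shouldRebuildNow(estimatedMaintenance, rebuildEstimate) {
		rebuild()
		return
	}
	deadline := time.Now().Add(estimatedMaintenance)
	for time.Now().Before(deadline) {
		if nodeIsUp() {
			return // node came back within the estimate; keep existing replicas
		}
		time.Sleep(30 * time.Second)
	}
	rebuild() // node did not return within the estimated window
}
```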

rohantmp avatar Apr 25 '19 06:04 rohantmp

  • Can we include features in the CRD for consumption by other controllers?
  • Can we deploy the CRD developed here without its controller? (to avoid a dependency)

rohantmp avatar Apr 25 '19 06:04 rohantmp

Can we include features in the CRD for consumption by other controllers?

Adding a timeout buffer for each node maintenance invocation, e.g. creating a CR to initiate maintenance with an optional timeout field, might be a reasonable thing to do, not only for storage nodes but for general use. @aglitke @rmohr @MarSik - any thoughts here?
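
As a rough illustration, such a field could look like the Go sketch below. The EstimatedDurationMinutes name is hypothetical and the surrounding spec is simplified; it is not a proposal for the exact API shape.

```go
// Hypothetical extension of the NodeMaintenance spec; only the
// EstimatedDurationMinutes field is new, and its name is illustrative.
package nodemaintenance // package name is illustrative as well

type NodeMaintenanceSpec struct {
	// NodeName is the node to put into maintenance mode.
	NodeName string `json:"nodeName"`
	// Reason is a free-form explanation for the maintenance.
	Reason string `json:"reason,omitempty"`
	// EstimatedDurationMinutes is the user's estimate of how long the
	// node will stay in maintenance. Other controllers (e.g. a storage
	// operator) could use it to decide whether to wait or to rebuild.
	// +optional
	EstimatedDurationMinutes *int32 `json:"estimatedDurationMinutes,omitempty"`
}
```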

Can we deploy the CRD developed here without its controller? (to avoid a dependency)

Not sure I completely understand the question here. If the intention is to create the CRD without deploying the operator itself (i.e. the controller), then yes, that is possible. What dependency are you trying to avoid?

yanirq avatar Apr 28 '19 08:04 yanirq

Rather than a timeout, I was thinking of a user estimate of how long the maintenance is going to last, so that we can decide whether to wait for the node (with its disks) to come back up or to recreate the data elsewhere from other replicas.

I'm thinking of consuming the CRD as a general way to signal NodeMaintenance for our storage operator, with or without the operator. Preferably, the node maintenance operator would also be deployed alongside it, but I'm imagining our consumption of the CRD wouldn't be affected by its absence.
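
For illustration, here is a minimal Go sketch of how a storage operator could read NodeMaintenance objects through the dynamic client, without importing the operator's Go types. The GroupVersionResource is an assumption and must match whatever version of the CRD is actually installed:

```go
// Sketch: consume NodeMaintenance CRs without depending on the
// node-maintenance-operator's code. The GroupVersionResource below is
// an assumption; check the installed CRD for the real group/version.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Assumed GVR for the NodeMaintenance CRD.
	gvr := schema.GroupVersionResource{
		Group:    "nodemaintenance.kubevirt.io",
		Version:  "v1beta1",
		Resource: "nodemaintenances",
	}

	list, err := client.Resource(gvr).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, nm := range list.Items {
		node, _, _ := unstructured.NestedString(nm.Object, "spec", "nodeName")
		fmt.Printf("node %q is under maintenance (object %s)\n", node, nm.GetName())
	}
}
```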

rohantmp avatar Apr 29 '19 06:04 rohantmp

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot avatar Jul 28 '19 07:07 kubevirt-bot

/remove-lifecycle stale

yanirq avatar Jul 28 '19 10:07 yanirq

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot avatar Oct 26 '19 11:10 kubevirt-bot

/remove-lifecycle stale

yanirq avatar Oct 27 '19 09:10 yanirq

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot avatar Jan 25 '20 10:01 kubevirt-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kubevirt-bot avatar Feb 24 '20 11:02 kubevirt-bot

/remove-lifecycle stale

MarSik avatar Feb 24 '20 11:02 MarSik

/remove-lifecycle rotten

MarSik avatar Feb 24 '20 11:02 MarSik

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot avatar May 24 '20 12:05 kubevirt-bot

I understand this is still an interesting feature.

/remove-lifecycle stale
/lifecycle frozen

slintes avatar Jun 03 '20 09:06 slintes