
Display the node name of which controller is on, instead of the pod name

alexzhc opened this issue 5 years ago • 3 comments

Currently, the operator registers the controller with LINSTOR using its pod name and pod IP, which is unintuitive and does not directly show where the controller is running:

# linstor node list 
+------------------------------------------------------------------------------------------------+
| Node                                      | NodeType   | Addresses                    | State  |
|================================================================================================|
| k8s-worker-1                              | SATELLITE  | 192.168.176.191:3366 (PLAIN) | Online |
| k8s-worker-2                              | SATELLITE  | 192.168.176.192:3366 (PLAIN) | Online |
| k8s-worker-3                              | SATELLITE  | 192.168.176.193:3366 (PLAIN) | Online |
| piraeus-op-cs-controller-6fbd7b7888-ngs45 | CONTROLLER | 172.29.69.195:3366 (PLAIN)   | Online |
+------------------------------------------------------------------------------------------------+

Ideally, it would display the name of the node that the controller is on, instead of the pod name. I tried to tweak it to use spec.nodeName and status.hostIP, but somehow LINSTOR does not allow registration using a containerPort. Changing the controller to use hostNetwork solves the problem, but that could be overkill. Is there any way to do this cleanly?
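For reference, here is roughly the downward API wiring I tried (a minimal sketch; the env var names NODE_NAME and HOST_IP are illustrative, not what the operator uses):

env:
  - name: NODE_NAME          # name of the node this pod is scheduled on
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  - name: HOST_IP            # IP address of that node
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP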

alexzhc · Aug 10 '20 04:08

> Ideally, it would display the name of the node that the controller is on, instead of the pod name.

Why is this important? I can understand it for the satellites. Once #56 is done, it shouldn't really matter where the controller pod is running. With the current setup I can quickly determine the right pod for kubectl logs if something is wrong.

> I tried to tweak it to use spec.nodeName and status.hostIP, but somehow LINSTOR does not allow registration using a containerPort.

> Changing the controller to use hostNetwork solves the problem, but that could be overkill.

Not really sure what's happening there. However, I don't see why we should use a containerPort <-> hostPort mapping. Again, the controller is not tied to any specific host, so it makes sense to just use simple pods.

WanzenBug · Aug 17 '20 10:08

  1. Accessing the LINSTOR API from outside of Kubernetes requires the IP of the node on which the LINSTOR controller runs, for example when using the standalone linstor command-line client (see the sketch after the snippet below).

  2. piraeus-operator is actually already using a containerPort <-> hostPort mapping for the LINSTOR controller:

ports:
  - containerPort: 3376   # LINSTOR controller plain connector (default port)
    hostPort: 3376
    protocol: TCP
  - containerPort: 3377   # LINSTOR controller SSL connector
    hostPort: 3377
    protocol: TCP
  - containerPort: 3370   # LINSTOR REST API (HTTP)
    hostPort: 3370
    protocol: TCP
  - containerPort: 3371   # LINSTOR REST API (HTTPS)
    hostPort: 3371
    protocol: TCP

alexzhc · Aug 18 '20 07:08

> 1. Accessing the LINSTOR API from outside of Kubernetes requires the IP of the node on which the LINSTOR controller runs, for example when using the standalone linstor command-line client.

Shouldn't this be handled by the appropriate Kubernetes concepts:

  • We already use a Service for Kubernetes-internal communication with LINSTOR. Can we reuse that in some way?
  • It may be useful to add an Ingress resource (a rough sketch follows below).
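
As a rough illustration of the Ingress idea (the resource name, host, and backing service name are assumptions, not what the operator ships):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: linstor-api               # hypothetical name
spec:
  rules:
    - host: linstor.example.com   # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: piraeus-op-cs   # assumed name of the existing controller service
                port:
                  number: 3370        # LINSTOR REST API (HTTP)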
> 2. piraeus-operator is actually already using a containerPort <-> hostPort mapping for the LINSTOR controller.

True. I plan to change that in #56; I don't see why it would be needed. Access should happen via the Service (which I also plan to change: it should act as a proxy, so it gets a stable IP address).
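
A minimal sketch of such a proxy-style Service (the type, selector, and names are assumptions, not the operator's actual manifest):

apiVersion: v1
kind: Service
metadata:
  name: piraeus-op-cs               # assumed service name
spec:
  type: LoadBalancer                # stable external address; ClusterIP suffices in-cluster
  selector:
    app: piraeus-op-cs-controller   # assumed pod label
  ports:
    - name: rest-api
      port: 3370                    # LINSTOR REST API (HTTP)
      targetPort: 3370
      protocol: TCP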

WanzenBug · Aug 18 '20 07:08

No longer relevant: in Operator v2 we run only a single controller, which is not registered with the cluster itself.

WanzenBug · Jun 14 '23 07:06