cloud-provider-openstack

[occm] For dual stack deployments, node addresses show only one IP if node-ip parameter is passed

royanirban76 opened this issue on Jul 29, 2022 • 5 comments

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

Case 1: With k8s 1.23.x and cloud-provider-openstack 1.23.x, if we pass "--cloud-provider=external" along with "--node-ip=10.0.16.2" as KUBELET_EXTRA_ARGS in the kubelet.service file, we observe that Node.status.addresses is as follows:

  • address: 10.0.16.2 type: InternalIP
  • address: dead::5 type: InternalIP

Case 2: But with the current k8s 1.24.x and cloud-provider-openstack 1.24.x, with the same config, we see only the node IP that was passed:

  • address: 10.0.16.5 type: InternalIP

Case 3: We tried k8s 1.24.x with cloud-provider-openstack 1.23.x, and the behavior is the same as in the earlier release:

  • address: 10.0.16.4 type: InternalIP
  • address: dead::2 type: InternalIP

What you expected to happen: The latest 1.24.x was expected to show the same behavior as in Cases 1 and 3.

How to reproduce it: See Case 2 above; a sketch of the kubelet configuration is shown below.
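For reference, a sketch of a kubelet drop-in matching the flags quoted above (the drop-in path is illustrative; the report only says the flags are passed as KUBELET_EXTRA_ARGS in the kubelet.service file):

    # Illustrative drop-in, e.g. /etc/systemd/system/kubelet.service.d/20-extra-args.conf
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external --node-ip=10.0.16.5"

With cloud-provider-openstack 1.23.x this kind of configuration produced both an IPv4 and an IPv6 InternalIP in Node.status.addresses; with 1.24.x only the IP passed via --node-ip is reported.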

Anything else we need to know?:

Environment:

  • openstack-cloud-controller-manager(or other related binary) version:
  • OpenStack version:
  • Others:

royanirban76 commented on Jul 29, 2022

With k8s 1.23.x and cloud-provider-openstack 1.23.x, if we pass "--cloud-provider=external" along with "--node-ip=10.0.16.2" as KUBELET_EXTRA_ARGS in the kubelet.service file, we observe that Node.status.addresses is as follows

I didn't see an IPv6 parameter in your input; shouldn't --node-ip=10.0.16.2 give only one IP address? I am guessing some change in 1.24 removed the handling that honored the IPv6 address, but I'm not sure where.

jichenjc commented on Aug 1, 2022

Hi @jichenjc, with 1.23.x we did not have to pass both IPs. Also, since we are using OpenStack as our cloud provider, we see an error during kubelet service startup if we pass both IPs together with "--cloud-provider=external". The understanding was that Node.status gets all the IPs provided by OpenStack, and that we can pass one more as "--node-ip=<IP address>". But now (1.24.x) the behavior has changed: we get only the IP we passed as "--node-ip=<IP address>", and the OpenStack-provided IPs are ignored.

royanirban76 commented on Aug 1, 2022

@jichenjc Are there any other logs or information needed to triage this issue?

chandanD4 commented on Aug 4, 2022

@jichenjc One more observation: for a dual-stack, IPv6-default deployment (k8s 1.24.2 and cloud-provider 1.24.1), if I pass "--node-ip=::" with "--cloud-provider=external":

  1. I see that Node.status.addresses has both addresses (IPv4 first, then IPv6).
  2. But for the pods kube-apiserver-master-, kube-controller-manager- and kube-scheduler-, if we describe the pods, under Status.IPs we see only one IP, the IPv6 address of the node. With the 1.23.3 cloud provider, Status.IPs had both IPs present. (Inspection commands are sketched below.)
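For reference, the two observations could be checked with commands like these (the node and pod name suffixes are placeholders, not taken from the report):

    # Node addresses as stored in Node.status.addresses
    kubectl get node <node-name> -o jsonpath='{.status.addresses}'

    # Pod IPs as stored in Pod.status.podIPs (shown as IPs by kubectl describe)
    kubectl get pod -n kube-system kube-apiserver-<node-name> -o jsonpath='{.status.podIPs}'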

royanirban76 commented on Aug 8, 2022

But for the pods kube-apiserver-master-, kube-controller-manager- and kube-scheduler-, if we describe the pods,

Um, I think the pod IPs, at least, don't belong to CPO's scope; CPO mostly focuses on node-related things.

One more observation: for a dual-stack, IPv6-default deployment (k8s 1.24.2 and cloud-provider 1.24.1), if I pass "--node-ip=::" with "--cloud-provider=external"

I double-checked our recent code, and CPO doesn't seem to have a logic change. The node-ip handling seems to come from https://github.com/kubernetes/cloud-provider/blob/master/controllers/node/node_controller.go#L369, and the logic appears to be node-ip first, then the addresses from the nodeAddresses function in CPO. I think you might want to ask in the cloud-provider repo whether anyone can help with this question.

Just some thoughts, as I don't have an environment to debug on.
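To illustrate that hypothesis, here is a minimal, self-contained Go sketch of an address filter that honors --node-ip first (the type and function names are illustrative, not the actual cloud-provider code). If the filter keeps only addresses equal to the single --node-ip, the cloud-reported IPv6 address is dropped, which matches the Case 2 behavior:

    package main

    import (
        "fmt"
        "net"
    )

    // NodeAddress mirrors the shape of a Kubernetes node address for this sketch.
    type NodeAddress struct {
        Type    string // e.g. "InternalIP"
        Address string
    }

    // filterByNodeIP keeps only the cloud-reported addresses that equal the
    // kubelet-provided node IP. A dual-stack-aware filter would also keep one
    // address of the other IP family; this sketch deliberately does not, to
    // show how a single --node-ip can collapse the address list.
    func filterByNodeIP(cloudAddrs []NodeAddress, nodeIP net.IP) []NodeAddress {
        if nodeIP == nil {
            return cloudAddrs // no --node-ip given: keep everything the cloud returned
        }
        var out []NodeAddress
        for _, a := range cloudAddrs {
            if ip := net.ParseIP(a.Address); ip != nil && ip.Equal(nodeIP) {
                out = append(out, a)
            }
        }
        return out
    }

    func main() {
        // Addresses as the OpenStack provider might report them
        // (IPv4 from Case 2; the IPv6 value is illustrative).
        cloudAddrs := []NodeAddress{
            {Type: "InternalIP", Address: "10.0.16.5"},
            {Type: "InternalIP", Address: "dead::5"},
        }
        nodeIP := net.ParseIP("10.0.16.5") // value of --node-ip
        fmt.Println(filterByNodeIP(cloudAddrs, nodeIP))
        // Prints [{InternalIP 10.0.16.5}]; the IPv6 address is gone.
    }

A filter that instead merged the provided node IP with the cloud-reported addresses, or kept one address per IP family, would reproduce the 1.23.x behavior described in Cases 1 and 3.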

jichenjc commented on Aug 9, 2022

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented on Nov 7, 2022

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented on Dec 7, 2022

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot commented on Jan 6, 2023

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the /close not-planned command in the previous comment:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot commented on Jan 6, 2023