kubespray
Enable Octavia LB on OpenStack: not applied
Environment:
- Cloud provider or hardware configuration: OpenStack
- OS (`printf "$(uname -srm)\n$(cat /etc/os-release)\n"`): CentOS Stream 8.5
- Version of Ansible (`ansible --version`):
  ansible 2.10.15
  config file = /home/centos/kubespray/ansible.cfg
  configured module search path = ['/home/centos/kubespray/library']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.8 (default, Jan 19 2022, 23:28:49) [GCC 8.5.0 20210514 (Red Hat 8.5.0-7)]
- Version of Python (`python --version`): Python 3
Kubespray version (commit) (`git rev-parse --short HEAD`): 5e67ebeb
Network plugin used: Flannel
Full inventory with variables (`ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"`):
Command used to invoke ansible: `cluster.yaml`, to build the cluster
Output of ansible run:
Everything succeeds, except that the Octavia LB is not applied on OpenStack. The configuration seems to have no effect.
Anything else do we need to know:
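For context, the goal here is presumably kubespray's LBaaS/Octavia integration through the external OpenStack cloud provider. Below is a minimal sketch of the group_vars that are typically involved; the variable names follow kubespray's OpenStack documentation for roughly this release and may differ between versions, and the IDs are placeholders rather than values from this report:

```yaml
# inventory/mycluster/group_vars/all/all.yml
# Sketch only: use the external cloud provider instead of the deprecated in-tree one.
cloud_provider: external
external_cloud_provider: openstack

# inventory/mycluster/group_vars/all/openstack.yml
# Sketch only: enable load balancers via the OpenStack cloud controller manager (Octavia).
# The IDs below are placeholders and must match the tenant's networks.
external_openstack_lbaas_enabled: true
external_openstack_lbaas_use_octavia: true
external_openstack_lbaas_subnet_id: "<subnet-uuid>"
external_openstack_lbaas_floating_network_id: "<public-network-uuid>"
```

With settings like these, kubespray deploys the openstack-cloud-controller-manager; load balancers are then created on demand for Services of type LoadBalancer, which is the point made in the next comment.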
@ayaseen what exactly is the expectation here? Octavia LBs are created for Kubernetes services with `Type=LoadBalancer`; an Octavia instance isn't created by default when a new cluster is deployed. Do you mean that Octavia instances are not created when you create such a service? Can you share the logs of the openstack-cloud-controller-manager? Do you see any errors?
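To make that distinction concrete, here is a minimal sketch of a Service that should trigger an Octavia load balancer, assuming the external OpenStack cloud controller manager is running (the name `test-octavia-lb` and the `app: my-app` selector are placeholders, not values from this issue):

```yaml
# Sketch: creating a Service of type LoadBalancer is what causes the
# openstack-cloud-controller-manager to request a load balancer from Octavia.
apiVersion: v1
kind: Service
metadata:
  name: test-octavia-lb      # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: my-app              # placeholder selector for the backend pods
  ports:
    - port: 80
      targetPort: 8080
```

If the Service's external IP stays in `<pending>`, something like `kubectl -n kube-system logs -l k8s-app=openstack-cloud-controller-manager` (namespace and label are assumptions; adjust to your deployment) should show whether the call to Octavia failed and why.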
/cc @Xartos any chance you are familiar with this setup? The OpenStack instances I have access to do not have Octavia support, so I can't personally look into the topic.
Since this issue was created yesterday, I would suspect that the service account for the cloud controller didn't have enough permissions and thus couldn't create the LB. If that's the case, then this should be fixed with this PR.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I'm also facing the same issue. Any clues would be appreciated! Thanks!