Openstack Loadbalancer
/kind bug
1. What kops version are you running? The command kops version, will display
this information.
Client version: 1.26.3 (git-v1.26.3)
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
Client Version: v1.27.2
3. What cloud provider are you using? Openstack
4. What commands did you run? What is the simplest way to reproduce this issue? kops create cluster --cloud openstack --name cluster.k8s.local --state ${KOPS_STATE_STORE} --zones eu-ch2-01 --image Standard_Debian_10_latest --v 100 --network-cidr 10.1.0.0/16
5. What happened after the commands executed? Cluster creation starts until the following error appears: I0614 07:51:34.453391 57650 context.go:320] retrying after error GetFloatingIP: fetching floating IP () failed: Resource not found: GET [https://vpc.eu-ch2.sc.otc.t-systems.com/v2.0/floatingips/], error message: {"error_msg":"The API does not exist or has not been published in the environment","error_code":"APIGW.0101","request_id":"d47177cc05efc3d105f9a10bff0312b3"}
6. What did you expect to happen? Creating a cluster
7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else do we need to know? Solution: The issue is caused by the format of the API call. kOps is calling the URL with a trailing slash, which causes an error with OpenStack. The API call should be made without the trailing slash:
Wrong: https://vpc.eu-ch2.sc.otc.t-systems.com/v2.0/floatingips/
Right: https://vpc.eu-ch2.sc.otc.t-systems.com/v2.0/floatingips
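For anyone who wants to verify the gateway behaviour independently of kOps, here is a minimal sketch (not kOps code; the OS_AUTH_TOKEN variable holding a pre-issued Keystone token is an assumption) that probes both URL forms and prints the status returned by the OTC API gateway:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Assumption: OS_AUTH_TOKEN contains a valid Keystone token for the project.
	token := os.Getenv("OS_AUTH_TOKEN")
	for _, u := range []string{
		"https://vpc.eu-ch2.sc.otc.t-systems.com/v2.0/floatingips",  // expected to work
		"https://vpc.eu-ch2.sc.otc.t-systems.com/v2.0/floatingips/", // expected APIGW.0101
	} {
		req, err := http.NewRequest(http.MethodGet, u, nil)
		if err != nil {
			panic(err)
		}
		req.Header.Set("X-Auth-Token", token)
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		resp.Body.Close()
		// Print which URL form the gateway accepts and which it rejects.
		fmt.Printf("%-62s -> %s\n", u, resp.Status)
	}
}
```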
CC @zetaab
kOps uses the gophercloud library exclusively for all queries against the OpenStack APIs. All queries towards the OpenStack floating IP APIs live in https://github.com/kubernetes/kops/blob/master/upup/pkg/fi/cloudup/openstack/floatingip.go
So in short: we cannot fix this issue in kOps. The correct place for this bug report is gophercloud: https://github.com/gophercloud/gophercloud
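For context, a rough sketch of how such a floating IP lookup goes through gophercloud (an illustration, not the actual kOps code; gophercloud v1 import paths and the description value are assumptions). The description filter mirrors the requests visible in the debug output further down:

```go
package main

import (
	"fmt"
	"os"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips"
	"github.com/gophercloud/gophercloud/pagination"
)

func main() {
	// Credentials come from the usual OS_* environment variables.
	authOpts, err := openstack.AuthOptionsFromEnv()
	if err != nil {
		panic(err)
	}
	provider, err := openstack.AuthenticatedClient(authOpts)
	if err != nil {
		panic(err)
	}
	// The Neutron endpoint (e.g. https://.../v2.0/) comes from the service
	// catalog; gophercloud appends "floatingips" to it without a trailing slash.
	network, err := openstack.NewNetworkV2(provider, gophercloud.EndpointOpts{
		Region: os.Getenv("OS_REGION_NAME"),
	})
	if err != nil {
		panic(err)
	}
	// List floating IPs filtered by description, as in the debug log below
	// (the description value here is illustrative).
	pager := floatingips.List(network, floatingips.ListOpts{
		Description: "fip-api.cluster.k8s.local",
	})
	err = pager.EachPage(func(page pagination.Page) (bool, error) {
		fips, err := floatingips.ExtractFloatingIPs(page)
		if err != nil {
			return false, err
		}
		for _, fip := range fips {
			fmt.Println(fip.ID, fip.FloatingIP)
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
}
```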
I opened the issue in the gophercloud repo.
I cannot reproduce this with current master. I added debug output to the OpenStack library so that it prints ALL raw HTTP request calls, and the result is:
https://gist.github.com/zetaab/d6c609c8373772357e9975831e8ca21d
doRequest: method=GET url=https://openstack.corp.com:13696/v2.0/floatingips?description=fip-api.jessesrv.k8s.local
doRequest: method=GET url=https://openstack.corp.com:13696/v2.0/floatingips?description=fip-bastions-1-jessesrv-k8s-local
It does not contain a trailing /.
All HTTP requests in the gophercloud library go through this function: https://github.com/gophercloud/gophercloud/blob/master/provider_client.go#L392. Since gophercloud is not adding the /, you perhaps have something else in your system that adds it.
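The debug output above was produced by instrumenting the HTTP layer. A minimal sketch of that kind of instrumentation (illustrative only, not the actual patch) is a logging http.RoundTripper plugged into the provider client's HTTPClient:

```go
package main

import (
	"log"
	"net/http"
)

// loggingTransport prints every outgoing request before delegating to the
// wrapped RoundTripper, mimicking the doRequest lines in the gist above.
type loggingTransport struct {
	next http.RoundTripper
}

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("doRequest: method=%s url=%s", req.Method, req.URL.String())
	return t.next.RoundTrip(req)
}

func main() {
	// With gophercloud you would instead set, after openstack.AuthenticatedClient:
	//   provider.HTTPClient.Transport = loggingTransport{next: http.DefaultTransport}
	// so that every API call made by the library is logged.
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
	resp, err := client.Get("https://example.com/v2.0/floatingips")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}
```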
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.