cluster-api-provider-openstack
How to use volume v3?
/kind bug
What steps did you take and what happened:
I set volume_api_version: 3 in clouds.yaml to use the Cinder v3 API.
However, when CAPO attempts to access the volume service (e.g., during machine provisioning), it appears to use the v1 API endpoint (/v1/{tenant_id}/volumes) instead.
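For reference, the setting lives in clouds.yaml roughly like this (cloud name, auth details, and domain are placeholders, not my real values):

```yaml
clouds:
  mycloud:                      # placeholder cloud name
    auth:
      auth_url: http://{domain}/identity/v3
      # credentials omitted
    region_name: KR1
    volume_api_version: 3       # the setting in question
```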
- error log
Expected HTTP response code [200 204 300] when accessing [GET http://{domain}/volume/v1/{tenant_id}/volumes/detail?name=capi-quickstart-control-plane-b2djn-root&project_id={tenant_id}], but got 404 instead
What did you expect to happen:
I expected CAPO (via Gophercloud) to use the v3 Cinder API (/v3/{project_id}) when volume_api_version: 3 is configured.
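For context, here is a minimal sketch (not CAPO's actual code) of how a Cinder v3 client is normally built with gophercloud v2; the region and volume name are placeholders taken from this report, and credentials are assumed to come from OS_* environment variables:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/gophercloud/gophercloud/v2"
	"github.com/gophercloud/gophercloud/v2/openstack"
	"github.com/gophercloud/gophercloud/v2/openstack/blockstorage/v3/volumes"
)

func main() {
	ctx := context.Background()

	// Credentials from OS_* environment variables.
	ao, err := openstack.AuthOptionsFromEnv()
	if err != nil {
		log.Fatal(err)
	}
	provider, err := openstack.AuthenticatedClient(ctx, ao)
	if err != nil {
		log.Fatal(err)
	}

	// This is where the catalog lookup happens: the client is expected to
	// resolve the v3 ("volumev3") entry rather than the legacy v1 "volume" one.
	blockStorage, err := openstack.NewBlockStorageV3(provider, gophercloud.EndpointOpts{
		Region: "KR1", // placeholder region
	})
	if err != nil {
		log.Fatal(err)
	}

	// List volumes by name, similar to the lookup seen in the error log above.
	allPages, err := volumes.List(blockStorage, volumes.ListOpts{
		Name: "capi-quickstart-control-plane-b2djn-root", // placeholder name
	}).AllPages(ctx)
	if err != nil {
		log.Fatal(err)
	}
	vols, err := volumes.ExtractVolumes(allPages)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("found %d volume(s)\n", len(vols))
}
```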
Anything else you would like to add:
Environment:
- Cluster API Provider OpenStack version (or `git rev-parse HEAD` if manually built): v0.12.3
- Cluster-API version: v1.10.2
- OpenStack version: 3.14.2
- Minikube/KIND version: kind v0.28.0 go1.24.2 darwin/arm64
- Kubernetes version (use `kubectl version`): v1.32.0
- OS (e.g. from `/etc/os-release`): macOS
@hyeyoung-leee how is your cloud deployed? what does your service catalog look like?
@mnaser
The volume service catalog is shown below.
Only the endpoints for the KR1 region with the public interface are listed.
```json
{
  "token": {
    "methods": [
      "password"
    ],
    "catalog": [
      {
        "endpoints": [
          {
            "url": "http://{domain}/volume/v1/{tenant_id}",
            "region": "KR1",
            "id": "fc01b67b-d32a-41ba-b2e8-45d5c6012a1a",
            "region_id": "KR1",
            "interface": "public"
          }
        ],
        "type": "volume",
        "name": "cinder"
      },
      {
        "endpoints": [
          {
            "url": "http://{domain}/volume/v2/{tenant_id}",
            "region": "KR1",
            "id": "e6347856-2a55-4341-abb1-e21f35ee7332",
            "region_id": "KR1",
            "interface": "public"
          }
        ],
        "type": "volumev2",
        "name": "cinderv2"
      },
      {
        "endpoints": [
          {
            "url": "http://{domain}/volume/v3/{tenant_id}",
            "region": "KR1",
            "id": "8ea6d4d3-9062-11eb-a22a-005056ac577a",
            "region_id": "KR1",
            "interface": "public"
          }
        ],
        "type": "volumev3",
        "name": "cinderv3"
      }
    ]
  }
}
```
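For anyone comparing against their own deployment, the catalog can be dumped with the OpenStack CLI (assuming python-openstackclient is installed and the cloud environment is configured):

```sh
# Lists every service/endpoint pair visible to the token, including the volume entries.
openstack catalog list
```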
Same issue happened for me: CAPO v0.12.3 uses v1 to access the volume service and ends up with a 404 Not Found. On another OpenStack deployment, the issue is not seen.
I guess the culprit is: https://github.com/gophercloud/gophercloud/pull/3435
Indeed, this needs a new version of gophercloud that includes the fix from https://github.com/gophercloud/gophercloud/pull/3435.
I compiled with the new gophercloud v2.8.0 a few days ago, and it is working without any nginx workaround.
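For anyone building CAPO themselves before a release carries the bump, updating the dependency is roughly:

```sh
# Run from a cluster-api-provider-openstack checkout; pulls gophercloud v2.8.0,
# which includes the fix referenced above.
go get github.com/gophercloud/gophercloud/v2@v2.8.0
go mod tidy
```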
Excellent, thanks a lot @djcenox for confirming the gophercloud bump to v2.8.0 did the trick. We can now close this issue. /close
@mandre: Closing this issue.