cloud-provider-openstack
[All][CI] Tracking switch to prow if possible
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug /kind feature
What happened:
see #1610 for more background
https://kubernetes.slack.com/archives/C0LSA3T7C/p1620276641087400?thread_ts=1620081921.083000&cid=C0LSA3T7C
Job configuration: https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes-sigs/c[…]rovider-openstack/cluster-api-provider-openstack-periodics.yaml
Notes:
preset-service-account: "true" injects the GCP account
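For anyone unfamiliar with Prow job configs, a periodic job wired up this way looks roughly like the sketch below; the job name, image tag, repo, and entry-point script are illustrative placeholders, not values copied from the linked CAPO config:

```yaml
periodics:
- name: periodic-cloud-provider-openstack-e2e   # illustrative name
  interval: 24h
  decorate: true
  labels:
    preset-service-account: "true"              # this is what injects the GCP service account
  extra_refs:
  - org: kubernetes
    repo: cloud-provider-openstack
    base_ref: master
  spec:
    containers:
    - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest-master  # illustrative tag
      command:
      - runner.sh
      args:
      - ./scripts/ci-e2e.sh                     # illustrative entry point
```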
Script to start the tests in the CAPO repo: https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/master/scripts/ci-e2e.sh
Notes:
You have to retrieve/reserve an account via Boskos (see the sketch after these notes)
The CAPO-specific parts are the ensure scripts and l.83-l.86; everything else should be the same in your case
There's a bit of Makefile magic between the make call in ci-e2e.sh and devstack-on-gce-project-install.sh, but the shell script is definitely what you're looking for: https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/master/hack/ci/devstack-on-gce-project-install.sh
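On the Boskos step: CI jobs typically acquire a project from Boskos over its HTTP API and release it when finished. A minimal sketch, assuming the standard Boskos endpoints and a `gce-project` resource type (the CAPO script may use a helper instead of raw curl):

```bash
#!/usr/bin/env bash
set -o errexit -o nounset -o pipefail

# Assumed values -- a real job gets these from the Prow environment / job config.
BOSKOS_HOST="${BOSKOS_HOST:-http://boskos.test-pods.svc.cluster.local}"
RESOURCE_TYPE="gce-project"          # assumed resource type
OWNER="${JOB_NAME:-manual-run}"

# Acquire a free project and remember its name so it can be released later.
resource="$(curl -fsS -X POST \
  "${BOSKOS_HOST}/acquire?type=${RESOURCE_TYPE}&state=free&dest=busy&owner=${OWNER}")"
project="$(echo "${resource}" | jq -r .name)"
echo "Using Boskos project: ${project}"

# ... provision devstack and run the e2e suite against ${project} ...

# Hand the project back so Boskos can clean and reuse it.
curl -fsS -X POST "${BOSKOS_HOST}/release?name=${project}&dest=dirty&owner=${OWNER}"
```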
Notes:
Your case is probably FLAVOR=default (which installs devstack on boot; FLAVOR=preinstalled uses an OS image where devstack is already installed)
At a high level the script does the following (a rough gcloud sketch follows this list):
- Create GCP resources: networks, subnets, firewall rules, router, NAT
- Create an image with nested virtualization enabled, based on --image-project ubuntu-os-cloud --image-family ubuntu-2004-lts
- Create a server (devstack is installed via cloud-init)
- Wait until devstack is reachable, then generate a clouds.yaml
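A very rough sketch of those steps in plain gcloud; the resource names, region, machine type, and CIDR are made up for illustration and the real script is the source of truth:

```bash
#!/usr/bin/env bash
set -o errexit -o nounset -o pipefail

# Network plumbing: network, subnet, firewall, router + NAT for egress.
gcloud compute networks create devstack-net --subnet-mode=custom
gcloud compute networks subnets create devstack-subnet \
  --network=devstack-net --range=10.0.0.0/24 --region=us-east4
gcloud compute firewall-rules create devstack-allow-ssh-https \
  --network=devstack-net --allow=tcp:22,tcp:443
gcloud compute routers create devstack-router --network=devstack-net --region=us-east4
gcloud compute routers nats create devstack-nat --router=devstack-router \
  --region=us-east4 --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges

# Image with nested virtualization enabled; the enable-vmx license is what lets
# devstack's own VMs run inside the GCE instance.
gcloud compute images create devstack-ubuntu-2004-nested \
  --source-image-family=ubuntu-2004-lts --source-image-project=ubuntu-os-cloud \
  --licenses="https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"

# The devstack host itself; cloud-init passed as user-data installs devstack on boot.
gcloud compute instances create devstack \
  --zone=us-east4-a --machine-type=n1-standard-16 \
  --image=devstack-ubuntu-2004-nested \
  --network=devstack-net --subnet=devstack-subnet \
  --metadata-from-file=user-data=devstack-cloud-init.yaml
```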
Cloud init file can be found here: https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/master/hack/ci/devstack-default-cloud-init.yaml.tpl
Notes:
More or less just the regular devstack installation, with a bit of iptables magic at the end for egress traffic
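For what it's worth, that "iptables magic" for egress usually boils down to masquerading devstack's floating-IP range out of the VM's primary NIC. A sketch, assuming the devstack default floating range and an interface name of ens4 (both assumptions, not values from the template):

```bash
# NAT traffic from devstack's floating-IP range (default 172.24.4.0/24) out of
# the instance's primary interface so guest VMs can reach the internet.
sudo iptables -t nat -A POSTROUTING -s 172.24.4.0/24 -o ens4 -j MASQUERADE
sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT
sudo iptables -A FORWARD -d 172.24.4.0/24 -m state --state ESTABLISHED,RELATED -j ACCEPT
```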
sbueringer 3 months ago
Just let me know if you have further questions or if I can help.
sbueringer 3 months ago
Huge difference for me compared to OpenLab was:
I finally know how our tests work, as it's pretty straightforward (compared to all the stuff that OpenLab does beforehand)
I can just execute the tests via my private GCP account and debug in every way I like. In OpenLab the feedback loop was pretty horrible.
I think it's still great that OpenLab is there and helps the whole community to test against OpenStack. Unfortunately, it just wasn't a great fit for CAPO, and the alternative is pretty great.
P.S. I also have a script for AWS right next to the one for GCP (https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/master/hack/ci/devstack-on-aws-project-install.sh). It re-uses the devstack installation and just does in AWS what the other script does on GCP.
https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes-sigs/cluster-api-provider-openstack/cluster-api-provider-openstack-periodics.yaml#L2-L47
What you expected to happen:
How to reproduce it:
Anything else we need to know?:
Environment:
- openstack-cloud-controller-manager(or other related binary) version:
- OpenStack version:
- Others:
So this issue is tracking the breakdown of work in case #1610 has no solution, including things like infra setup, CI script creation, etc.
Being done as part of https://github.com/kubernetes/cloud-provider-openstack/pull/1632 and https://github.com/kubernetes/test-infra/pull/23328
https://github.com/kubernetes/test-infra/pull/23402
@jichenjc This script, ci-csi-cinder-e2e.sh, is yet to be added, right? Are you working on that?
@ramineni yes, something like https://github.com/theopenlab/openlab-zuul-jobs/pull/1195/files will be replicated to our repo
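A purely illustrative skeleton of what such a ci-csi-cinder-e2e.sh entry point could look like; nothing here is taken from the OpenLab job, and the steps are placeholders to show the shape of the work:

```bash
#!/usr/bin/env bash
set -o errexit -o nounset -o pipefail

REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
echo "Running from ${REPO_ROOT}"

# 1. Bring up an OpenStack cloud to test against, e.g. by reusing the
#    devstack-on-GCE provisioning discussed above, and export a clouds.yaml.

# 2. Build the cinder-csi-plugin image from this checkout and deploy it
#    (plus its manifests) to a test cluster.

# 3. Run the storage e2e suite against the deployed driver, for example the
#    upstream external-storage tests pointed at a Cinder CSI test-driver manifest.
echo "TODO: implement the steps above"
```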
https://github.com/kubernetes/test-infra/pull/23495
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I think the following issues could all fall under this topic:
- https://github.com/kubernetes/cloud-provider-openstack/issues/1540
- https://github.com/kubernetes/cloud-provider-openstack/issues/1825
- https://github.com/kubernetes/cloud-provider-openstack/issues/1707
- https://github.com/kubernetes/cloud-provider-openstack/issues/1702
I have come here because I have hit the following problems developing CCM e2e tests for CAPO:
- My local testing is impacted by the docker.io rate limit when pulling the occm image
- The occm release manifests don't include a placeholder for the pull secret required to use docker.io (see the sketch after this list)
- The occm release images aren't built with a secure pipeline
I think that moving the whole pipeline onto the existing k8s infrastructure, as documented in https://github.com/kubernetes/k8s.io/blob/main/k8s.gcr.io/README.md, could solve all of these.
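On the pull-secret point, the manifests would only need an imagePullSecrets placeholder that users or CI can point at their own docker.io credentials. A hypothetical fragment of the occm DaemonSet pod spec (illustrative, not the shipped manifest; the secret name is made up):

```yaml
spec:
  template:
    spec:
      # Placeholder the user or CI would create and reference -- not in the current manifests.
      imagePullSecrets:
      - name: dockerio-pull-secret
      containers:
      - name: openstack-cloud-controller-manager
        image: docker.io/k8scloudprovider/openstack-cloud-controller-manager:latest
```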
This was discussed some time ago, if I remember correctly; I don't remember the exact reason, but what @lingxiankong mentioned seemed to come down to time and contribution constraints, etc.
It looks like k8s-staging-cloud-provider-ibm and k8s-staging-cloud-provider-gcp are hosting images in the staging repo; maybe we can follow that and give it a try (but not use it until we are confident enough)
https://github.com/kubernetes/k8s.io/pull/3638
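For anyone picking this up: assuming the linked PR sets up a k8s-staging-cloud-provider-openstack staging project, promotion to the production registry is driven by an images.yaml in kubernetes/k8s.io that maps image digests to tags, roughly like the sketch below (path, digest, and tag are placeholders):

```yaml
# k8s.gcr.io/images/k8s-staging-cloud-provider-openstack/images.yaml (illustrative)
- name: openstack-cloud-controller-manager
  dmap:
    "sha256:0000000000000000000000000000000000000000000000000000000000000000": ["v1.25.0"]
```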
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.