catapult
SCF and KubeCF CI implementation
GCloud 318.0.0 or greater is [required](https://issuetracker.google.com/issues/170125513) to work with Python 3.9, which is the default version on GitHub Actions. We currently have workarounds to pin to Python 3.8, but it...
quarks-operator deletes the namespace it is watching on its own; don't remove it manually. Closes https://github.com/SUSE/catapult/issues/324
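A minimal cleanup sketch under this constraint: since quarks-operator removes its watched namespace itself during uninstall, the script should only wait for the namespace to disappear rather than delete it. The `susecf-scf` release name and `scf` namespace are taken from the log below; treat them as placeholders.

```shell
# Uninstall the release; quarks-operator's pre-delete hook handles the
# namespace, so we do NOT run `kubectl delete namespace scf` ourselves.
helm uninstall susecf-scf --namespace scf

# Just wait for the operator to finish removing the namespace.
kubectl wait --for=delete namespace/scf --timeout=120s || true
```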
```
make -s -C modules/kubecf clean
/tmp/build/1ef78b77/catapult/buildci-aks-0ed4627d4a6934ef /tmp/build/1ef78b77/catapult/modules/kubecf [./clean.sh] [backend:aks] [cluster:ci-aks-0ed4627d4a6934ef]
Loading
namespace "scf" deleted
podsecuritypolicy.policy "susecf-scf-default" deleted
Error: warning: Hook pre-delete quarks/templates/hooks.yaml failed: namespaces "scf" not found
make[1]: ***...
```
This PR enables testing airgapped clusters on GKE. It contains targets for setting up a local-registry, pushing the images from imagelist.txt to the registry, and airgapping the cluster via k8s...
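The local-registry flow described above might look roughly like this; the registry port, and the assumption that imagelist.txt holds one image reference per line, are illustrative rather than taken from the PR.

```shell
# Start a throwaway local registry (port 5000 is an assumption).
docker run -d --name local-registry -p 5000:5000 registry:2

# Pull each image from imagelist.txt, re-tag it into the local
# registry (dropping the original registry host), and push it.
while read -r image; do
  docker pull "$image"
  docker tag "$image" "localhost:5000/${image#*/}"
  docker push "localhost:5000/${image#*/}"
done < imagelist.txt
```

With the images mirrored, the cluster can then be cut off from external registries and still resolve everything from `localhost:5000`.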
Provide a new implementation for a dependency system. Ideally, it would look similar to, if not the same as, https://github.com/cloudfoundry-incubator/kubecf/tree/master/scripts/tools. Some constraints that have popped up: - Needs to be...
This implements all relevant improvements from: https://confluence.suse.com/pages/viewpage.action?pageId=565905160 The only relevant one for OpenStack is raising the pid_limit (the maximum number of processes inside one container) from 1024 to 4096 (default number...
We create a resource group in AKS on deployment; mirroring that, we should also remove it when cleaning up.
To verify upgrades are working as intended, when I run `make kubecf-upgrade`:
1. display the deployed version of kubecf (`helm list --all-namespaces` would suffice)
2. create a pre-upgrade org, space and user...
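As a sketch, the verification steps above could be scripted like this; the org, space, user, and password names are placeholders, and the cf CLI is assumed to be logged in against the deployment.

```shell
helm list --all-namespaces            # 1. record the deployed kubecf version
cf create-org pre-upgrade-org         # 2. create pre-upgrade org, space and user
cf create-space pre-upgrade-space -o pre-upgrade-org
cf create-user pre-upgrade-user placeholder-password

make kubecf-upgrade                   # run the upgrade under test

helm list --all-namespaces            # the chart version should have changed
cf orgs | grep pre-upgrade-org        # pre-upgrade data should still exist
```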
After you run `make stratos-clean`, the DNS entries created upon deployment remain configured. This causes problems on subsequent deployments if a test or user tries to access the Stratos UI via...
Importing a kubeconfig should _always depend only on the kubeconfig and general public cloud credentials_. It should not depend on specifics of the cluster to be imported (GKE_CLUSTER_NAME or GKE_CLUSTER_ZONE)....
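Under that constraint, an import sketch uses only the kubeconfig file itself and never cluster-specific variables; the path below is a placeholder.

```shell
# Works for any cluster: depends only on the kubeconfig file.
export KUBECONFIG=/path/to/imported.kubeconfig
kubectl get nodes

# The coupled form the issue argues against, shown for contrast:
# gcloud container clusters get-credentials "$GKE_CLUSTER_NAME" --zone "$GKE_CLUSTER_ZONE"
```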