Document how to upgrade OLM
Feature Request
Is your feature request related to a problem? Please describe.
I have installed an older version of OLM and wish to upgrade to a newer version. OLM doesn't provide any official upgrade steps, and kubectl apply fails because the kubectl.kubernetes.io/last-applied-configuration annotation causes the CRD to exceed 262144 bytes.
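As an illustration, the size of the stored annotation on one of the OLM CRDs can be checked like this (the CRD name is the ClusterServiceVersion CRD shipped with OLM):
# print the size in bytes of the last-applied-configuration annotation
kubectl apply view-last-applied crd/clusterserviceversions.operators.coreos.com | wc -c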
Describe the solution you'd like
I would like the OLM project to provide official steps for performing an upgrade.
Acceptance Criteria:
- The steps for upgrading OLM are documented and made available in the release notes.
- Strive for as much automation as we can, but if manual steps are required the reasoning should be called out in the PR.
FYI: Did a test with the CRD to see what kubectl replace would do. It seems a kubectl replace (as long as you don't use --force) allows the CRs to remain, and doesn't update the last-applied-configuration annotation.
Yes, kubectl replace uses PUT as long as you don't use --force.
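As an illustration, replacing the CRDs from a release manifest would look like this (the URL pattern follows the existing install.sh, and the version is only an example):
# without --force this is a PUT, so existing CRs are preserved
kubectl replace -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.24.0/crds.yaml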
I would propose to create a script update.sh, similar to install.sh, which does the following (a minimal sketch follows this list):
- checks that the cluster is not OpenShift
- checks that OLM is already installed
- uses replace for crds.yaml
- uses apply for olm.yaml, so that modifications made to the OLMConfig or the deployments, e.g. resource requests or node selectors, are not lost
- checks that the new version is running
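Something like the following, as a minimal sketch (the OpenShift and installation checks, the namespace, and the deployment names are illustrative; the URL layout assumes the existing release asset names crds.yaml and olm.yaml):

#!/bin/bash
# Minimal sketch of the proposed update.sh; names and checks are illustrative
set -e

release="${1:?release version is required, e.g. v0.24.0}"
url="https://github.com/operator-framework/operator-lifecycle-manager/releases/download/${release}"
namespace=olm

# refuse to run on OpenShift, where OLM is managed by the cluster itself
if kubectl get clusterversion &> /dev/null; then
    echo "OpenShift detected: OLM is managed by the Cluster Version Operator, not this script" >&2
    exit 1
fi

# require an existing OLM installation to upgrade
if ! kubectl get deployment olm-operator -n "${namespace}" &> /dev/null; then
    echo "OLM does not appear to be installed; use install.sh instead" >&2
    exit 1
fi

# replace keeps existing CRs and avoids growing the last-applied-configuration annotation
kubectl replace -f "${url}/crds.yaml"
kubectl wait --for=condition=Established -f "${url}/crds.yaml" --timeout=60s

# apply preserves local changes such as resource requests or node selectors
kubectl apply -f "${url}/olm.yaml"

# check that the new version is rolled out
kubectl rollout status -w deployment/olm-operator -n "${namespace}"
kubectl rollout status -w deployment/catalog-operator -n "${namespace}"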
I would also propose to modify install.sh to use apply instead of create here:
kubectl create -f "${url}/olm.yaml"
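For illustration, the change would just swap the verb (the ${url} variable is the one already defined in install.sh):
# apply instead of create, so the resources can later be updated and the script can be re-run
kubectl apply -f "${url}/olm.yaml"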
We should then add the command to the release notes, e.g.:
curl -L https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.24.0/update.sh -o update.sh
chmod +x update.sh
./update.sh v0.24.0
The advantage of having a script, compared to just documenting the replacement of the CRDs and the application of the OLM resources, is that we can also build into the script the handling of resources that get renamed or removed, when that happens.
Regarding documentation, I would add update instructions under Core Tasks. Frankly I don't know:
- why OLM install is under QuickStart and not Core Tasks
- why it is written QuickStart and not Quick Start
- why operator-sdk olm install is referenced in the quick start and not the install script
- why operator-sdk olm install exists at all
Also, I think it would make sense to have separate top level menus for OLM administration vs OLM usage but I am diverging from the purpose of this issue.
@awgreene @Jamstah let me know what you think and if that sounds reasonable to you I could create a PR.
For people using GitOps, replace or server-side apply (SSA) can be used for the CRDs.
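For example, server-side apply avoids the client-side last-applied-configuration annotation entirely; the release URL and version below are only illustrative, and --force-conflicts may be needed to take over fields owned by another field manager:
kubectl apply --server-side --force-conflicts -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.24.0/crds.yaml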
The containerPort protocol has been added for the upcoming release. For older releases, the field will need to be ignored in the GitOps tool, e.g. in ArgoCD.
I think the proposed update script makes a lot of sense. I'm also not sure why we have a Go version of the install; I guess it's because the operator-sdk is a single binary, so we can't easily ship the script with it, and we want users to be able to get a cluster up and running easily.
I would be tempted to have a single install/upgrade script that can be run idempotently on the cluster instead of splitting it into two.
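Something along these lines, assuming the same release layout and ${url} variable as install.sh:
# sketch of an idempotent install-or-upgrade branch
if kubectl get deployment olm-operator -n olm &> /dev/null; then
    kubectl replace -f "${url}/crds.yaml"   # upgrade: keep existing CRs
else
    kubectl create -f "${url}/crds.yaml"    # fresh install
fi
kubectl apply -f "${url}/olm.yaml"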
From the discussion I had with @kevinrizza @awgreene @joelanford my understanding is that you don't want to invest in this and you are against an external contribution for it. In this respect I guess that this issue can be closed. Are you seeing that differently?
The update script makes a lot of sense, as would a Helm version of the installation, the reason being that ArgoCD can be used to lifecycle OLM (and ArgoCD can actually lifecycle itself too).
I modified the installation script according to @fgiloux's comment to work for upgrading. You can find it here.