operator-lifecycle-manager
Upgrade OLM between releases
Feature Request
Is your feature request related to a problem? Please describe.
OLM itself is installed by applying the manifests in the latest release to a cluster via `kubectl apply`. However, for a long-running cluster that expects to pull in newer versions of OLM over time, there is no clear path between releases. How do you install a newer version of OLM without first uninstalling it? In the trivial case, you could just reapply the new manifests and pull in their updates, but in some cases (e.g. a manifest that has been removed between releases) reapplying alone would leave stale resources behind.
Describe the solution you'd like
An explicit upgrade script (similar to install.sh) that compares the current version on the cluster to the new version being applied, and that knows how to upgrade from one version to the next. If there are issues upgrading between those versions, it could print an error message stating why, along with next steps.
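A rough sketch of what such a script could look like. Everything here is illustrative: it assumes plain `X.Y.Z` release tags, and that the running version can be read from a label on the `olm-operator` deployment (the real script would need whichever version marker OLM actually publishes). The cluster steps only run when `APPLY=1` is set, so the sketch can be read without a cluster:

```shell
#!/usr/bin/env bash
# upgrade.sh (hypothetical): compare the on-cluster OLM version to a target
# release and only apply the new manifests if the target is newer.
set -euo pipefail

# True (exit 0) if $1 is an older semver than $2; assumes plain X.Y.Z versions.
version_lt() {
  [ "$1" != "$2" ] &&
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

TARGET="${1:-0.22.0}"  # version to upgrade to (example default)

# Guarded so the sketch is side-effect free by default; set APPLY=1 to run it.
if [ "${APPLY:-0}" = 1 ]; then
  # Assumption: the running version is exposed as a label on the deployment.
  current="$(kubectl -n olm get deployment olm-operator \
    -o jsonpath='{.metadata.labels.app\.kubernetes\.io/version}')"
  if ! version_lt "$current" "$TARGET"; then
    echo "cluster already at ${current}; nothing to do"
    exit 0
  fi
  base="https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v${TARGET}"
  kubectl apply --server-side -f "${base}/crds.yaml"
  kubectl wait --for=condition=Established -f "${base}/crds.yaml"
  kubectl apply -f "${base}/olm.yaml"
fi
```

This is where version-specific migration steps (the "aware of how to upgrade from one version to the next" part) would hook in, between the version check and the apply.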
With the advent of operator-framework/rukpak -- and its counterpart, deppy -- I think we should package OLM as a "plain bundle", and use rukpak to handle its upgrades. @kevinrizza wdyt?
cc @timflannagan @perdasilva
@njhale The only problem with packaging OLM as a plain bundle is that rukpak's plain provisioner currently has a limitation: Bundles that contain both CRD definitions and instances of those CRDs as CRs result in an ordering problem, due to our usage of helm under the hood. See https://github.com/operator-framework/rukpak/issues/131 for more information.
That said, it's definitely possible to package OLM as two different Bundle resources:
- CRD definitions
- Application + custom resource instances
With the introduction of embedded Bundles, we'd need to create two BundleInstances to instantiate OLM, which seems like overkill right now?
I can upgrade OLM like this:
```shell
# Run this command the first time without --force-conflicts to check whether
# you have any conflicts. If you do, it is better to analyse them first.
kubectl apply -f ./crds.yaml --server-side=true --force-conflicts
# Check that all CRDs were installed properly.
kubectl wait --for=condition=Established -f ./crds.yaml
# The OLM operators will be updated and restarted here.
kubectl apply -f ./olm.yaml
```
What potential issues can I have in this case?
Do you have at least a rough timeline as to when this is planned to be implemented?
@hedgss Since this is manually applying manifests, it can cause issues if we rename or restructure the manifest files that are part of an OLM release. But to be honest, we haven't done that as far as I can remember.
The other scenario is that within `olm.yaml` we change the structure in which OLM is deployed, i.e. when adding new controllers or removing older ones. Again, that is not a likely scenario at the moment and not something we would do without a major version bump of OLM. But if you want to be extra sure, you can essentially `kubectl delete -f ./olm.yaml` with the previous `olm.yaml` and then `kubectl apply` the new one, for a clean removal and install of the new version. The CRDs should never be deleted and should always be safe to update by applying `crds.yaml`.
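The clean-swap path described above can be sketched as a small script. The `old/` and `new/` directories (holding the previous and new release manifests) are assumptions for illustration; the script prints each step and only executes the kubectl commands when `APPLY=1` is set, so the plan can be reviewed first:

```shell
#!/usr/bin/env bash
# Clean removal + reinstall of OLM as described above. Dry-run by default.
set -euo pipefail

run() {
  echo "+ $*"                    # always show the step
  [ "${APPLY:-0}" = 1 ] && "$@"  # only touch the cluster when APPLY=1
  return 0
}

run kubectl delete -f old/olm.yaml --ignore-not-found    # previous release's olm.yaml
run kubectl apply --server-side -f new/crds.yaml         # CRDs are updated, never deleted
run kubectl wait --for=condition=Established -f new/crds.yaml
run kubectl apply -f new/olm.yaml                        # install the new release
```

Keeping the CRD apply separate from the delete/apply of `olm.yaml` mirrors the advice above: the workloads get a clean swap, while the CRDs (and the CRs stored under them) are only ever updated in place.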