How to handle version upgrades of CRD
Problem
I am wondering how to handle a version upgrade of a CRD using kopf. I could not find anything in the documentation mentioning this so I thought this could be handled with 2 different handlers listening to separate versions. I tried that but couldn't get it to work so I decided to ask here. What I have tried so far is this:
I have this CRD
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: keycloakusers.keycloak.genuitysci.com
spec:
  group: keycloak.genuitysci.com
  # either Namespaced or Cluster
  scope: Cluster
  names:
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: keycloakusers
    # singular name to be used as an alias on the CLI and for display
    singular: keycloakuser
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: KeycloakUser
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - kcu
  conversion:
    strategy: None
  versions:
  - name: v2
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          status:
            type: object
            x-kubernetes-preserve-unknown-fields: true
          spec:
            type: object
            properties:
              realm:
                type: string
              user:
                type: object
                properties:
                  username:
                    type: string
                  firstName:
                    type: string
                  lastName:
                    type: string
                  email:
                    type: string
                  enabled:
                    type: boolean
                  emailVerified:
                    type: boolean
                  roles:
                    type: array
                    items:
                      type: string
  - name: v1
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          status:
            type: object
            x-kubernetes-preserve-unknown-fields: true
          spec:
            type: object
            properties:
              realm:
                type: string
              user:
                type: object
                properties:
                  username:
                    type: string
                  firstName:
                    type: string
                  lastName:
                    type: string
                  email:
                    type: string
                  enabled:
                    type: boolean
                  #emailVerified:
                  #  type: boolean
                  roles:
                    type: array
                    items:
                      type: string
And these 2 handlers:
import kopf
import logging

log = logging.getLogger('keycloakoperator')

@kopf.on.create('keycloak.genuitysci.com', 'v1', 'keycloakusers')
async def create_user(spec, **kwargs):
    log.info('create v1 keycloak user')

@kopf.on.create('keycloak.genuitysci.com', 'v2', 'keycloakusers')
async def create_userv2(spec, **kwargs):
    log.info('create v2 keycloak user')
Now when I create a resource using v1 of my CRD
apiVersion: "keycloak.genuitysci.com/v1"
kind: KeycloakUser
metadata:
name: pipeline-manager
spec:
realm: myrealm.com
user:
username: [email protected]
email: [email protected]
roles:
- my-role
Both handlers are executed, as you can see from the logs that were produced:
2022-09-08 13:26:24,727 - keycloakoperator INFO : create v1 keycloak user
2022-09-08 13:26:24,729 - kopf.objects INFO : Handler 'create_user' succeeded.
2022-09-08 13:26:24,730 - kopf.objects INFO : Creation is processed: 1 succeeded; 0 failed.
2022-09-08 13:26:24,738 - keycloakoperator INFO : create v2 keycloak user
2022-09-08 13:26:24,739 - kopf.objects INFO : Handler 'create_userv2' succeeded.
2022-09-08 13:26:24,739 - kopf.objects INFO : Creation is processed: 1 succeeded; 0 failed.
The same thing happens when I create a resource using v2
apiVersion: "keycloak.genuitysci.com/v2"
kind: KeycloakUser
metadata:
name: deployer
spec:
realm: myrealm.com
user:
username: [email protected]
email: [email protected]
roles:
- my-role
As you can see from these log lines which look exactly the same:
2022-09-08 13:27:29,585 - keycloakoperator INFO : create v1 keycloak user
2022-09-08 13:27:29,586 - kopf.objects INFO : Handler 'create_user' succeeded.
2022-09-08 13:27:29,587 - kopf.objects INFO : Creation is processed: 1 succeeded; 0 failed.
2022-09-08 13:27:29,591 - keycloakoperator INFO : create v2 keycloak user
2022-09-08 13:27:29,591 - kopf.objects INFO : Handler 'create_userv2' succeeded.
2022-09-08 13:27:29,592 - kopf.objects INFO : Creation is processed: 1 succeeded; 0 failed.
Am I missing something here? Shouldn't the handlers only be listening to a specific version of each resource? If I can't use this method of handling version upgrades of CRDs, how then is it best to handle it using kopf?
IMHO, kopf should provide the ability to specify conversion webhooks in the same way that we can specify admission webhooks.
How easy would this be to implement?
IMHO, kopf should provide the ability to specify conversion webhooks in the same way that we can specify admission webhooks. How easy would this be to implement?
Having all the http-listening infrastructure in place, it will be easy to add. The problem is that I did not (and still do not) understand the process of the CRD upgrade; specifically, where and how the conversion happens.
I would appreciate a link to an explanatory article on how this is done (language-agnostic), preferably with examples. The official documentation looks cryptic to me (looked; maybe it is better nowadays — I will take a look again).
Am I missing something here? Shouldn't the handlers only be listening to a specific version of each resource?
Hm. This was the idea that I first wanted to suggest, and that I kept in mind when developing those group-version-plural filters. It seems, it does not work that way.
If I can't use this method of handling version upgrades of CRDs, how then is it best to handle it using kopf?
At least one simple way would be to have a single handler, accept the resource kwarg, and then check resource.version with regular ifs:
@kopf.on.create('keycloak.genuitysci.com', 'keycloakusers')
def fn(resource, spec, **_):
    if resource.version == 'v1':
        ...
    if resource.version == 'v2':
        ...
Alternatively, it could be 2 separate handlers with the when callback filter doing the same checks:
@kopf.on.create('keycloak.genuitysci.com', 'keycloakusers', when=lambda resource, **_: resource.version == 'v1')
def fn_v1(spec, **_):
    ...

@kopf.on.create('keycloak.genuitysci.com', 'keycloakusers', when=lambda resource, **_: resource.version == 'v2')
def fn_v2(spec, **_):
    ...
But that does not look nice and clean. If Kubernetes makes all versions equal for the API, there should be a way with conversion webhooks, as mentioned above.
Hm. This was the idea that I first wanted to suggest, and that I kept in mind when developing those group-version-plural filters. It seems, it does not work that way.
So this is a bug then? Do you think you can get around to looking into that? I would also be willing to contribute with a PR but I have very little insight into the inner workings of kopf and wouldn't know where to dip my toes in.
I guess I can work around this now by checking the version explicitly like you mention in your first example but I really think it should be possible to create a handler for a specific version of a resource. Apart from this annoying issue kopf is still a really great framework and I really appreciate all your work.
I don't think it works like this in Kubernetes at all, right? Each CRD has only one version that is designated as the "storage version", and I think this is the version on which events fire, which is why the conversion webhook is important.
Maybe this is not a way to achieve it.
I have this CRD
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: postgresqls.postgres.radondb.io
  namespace: default
spec:
  group: postgres.radondb.io
  names:
    kind: PostgreSQL
    listKind: PostgreSQLList
    plural: postgresqls
    singular: postgresql
    shortNames:
    - pg
  scope: Namespaced
  versions:
  - name: v2
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          status:
            type: object
            x-kubernetes-preserve-unknown-fields: true
          spec:
            type: object
            properties:
              action:
                type: string
                enum:
                - 'start'
                - 'stop'
  - name: v1
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          status:
            type: object
            x-kubernetes-preserve-unknown-fields: true
          spec:
            type: object
            properties:
              action:
                type: string
                enum:
                - 'start'
                - 'stop'
And I have the following 2 handlers:
v1/postgres.py
import kopf
import logging

@kopf.on.create(
    'postgres.radondb.io',
    'postgresqls',
)
def cluster_create_v1(
    resource: kopf.Resource,
    spec: kopf.Spec,
    logger: logging.Logger,
    **_kwargs,
):
    logger.error(f"create_cluster_v1 with resource.version={resource.version}")
v2/postgres.py
import kopf
import logging

@kopf.on.create(
    'postgres.radondb.io',
    'postgresqls',
)
def cluster_create_v2(
    resource: kopf.Resource,
    spec: kopf.Spec,
    logger: logging.Logger,
    **_kwargs,
):
    logger.error(f"create_cluster_v2 with resource.version={resource.version}")
With the following directory structure:
$ tree .
.
├── pg-v1.yaml
├── pg-v2.yaml
├── pg.crd
├── v1
│   ├── __pycache__
│   │   └── postgres.cpython-38.pyc
│   └── postgres.py
└── v2
    ├── __pycache__
    │   └── postgres.cpython-38.pyc
    └── postgres.py

4 directories, 7 files
Create a resource using the v1 version:
apiVersion: postgres.radondb.io/v1
kind: PostgreSQL
metadata:
  name: pg
  #namespace: radondb-postgres-operator
spec:
  action: start
Logs:
# kopf run -A v2/postgres.py v1/postgres.py
[2023-05-04 16:54:58,903] kopf._core.engines.a [INFO ] Initial authentication has been initiated.
[2023-05-04 16:54:58,908] kopf.activities.auth [INFO ] Activity 'login_via_client' succeeded.
[2023-05-04 16:54:58,908] kopf._core.engines.a [INFO ] Initial authentication has finished.
[2023-05-04 16:55:15,716] kopf.objects [ERROR ] [default/pg] create_cluster_v2 with resource.version=v2
[2023-05-04 16:55:15,717] kopf.objects [INFO ] [default/pg] Handler 'cluster_create_v2' succeeded.
[2023-05-04 16:55:15,830] kopf.objects [ERROR ] [default/pg] create_cluster_v1 with resource.version=v2
[2023-05-04 16:55:15,831] kopf.objects [INFO ] [default/pg] Handler 'cluster_create_v1' succeeded.
[2023-05-04 16:55:15,831] kopf.objects [INFO ] [default/pg] Creation is processed: 2 succeeded; 0 failed.
It seems that the v1 object is converted to the v2 version before being passed to the handlers (it looks like the conversion targets the version marked with storage: true in the CRD).
How can I implement handlers that listen to multiple versions? Or is there any progress on handling CRD upgrades?
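For reference, a workaround sketch of a single version-tolerant handler (an assumption-based sketch, not an official kopf feature). It uses the KeycloakUser CRD from the top of the thread, where v2 added emailVerified: with conversion strategy None the API server rewrites only apiVersion, so an object created as v1 arrives without the fields added in v2, and those fields should be read defensively.

import kopf

# Hypothetical single handler; field names come from the KeycloakUser CRD above.
@kopf.on.create('keycloak.genuitysci.com', 'keycloakusers')
def create_user(spec: kopf.Spec, logger, **_):
    user = spec.get('user', {})
    # emailVerified exists only in v2; objects created as v1 will not have it.
    email_verified = user.get('emailVerified', False)
    logger.info(f"creating user {user.get('username')} (emailVerified={email_verified})")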
IMHO, kopf should provide the ability to specify conversion webhooks in the same way that we can specify admission webhooks. How easy would this be to implement?
Having all the http-listening infrastructure in place, it will be easy to add. The problem is that I did not (and still do not) understand the process of the CRD upgrade; specifically, where and how the conversion happens.
I would appreciate a link to an explanatory article on how this is done (language-agnostic), preferably with examples. The official documentation looks cryptic to me (looked; maybe it is better nowadays — I will take a look again).
I read the Kubernetes webhook-related documents and gathered the following information (sorry if there are mistakes). I hope it can help you.
Here are the CRD upgrade steps:
1. Define a handler for when a conversion is triggered: the spec.conversion section of the CRD defines the procedure that handles the conversion logic, for example an HTTP request to a webhook (https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#url).
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
...
spec:
  ...
  conversion:
    strategy: Webhook
    webhook:
      clientConfig:
        url: "https://my-webhook.example.com:9443/my-webhook-path"
  ...
2. Define a responder for requests to https://my-webhook.example.com:9443/my-webhook-path.
Here is an example converter for stable.example.com/v1 and stable.example.com/v2: https://github.com/kubernetes/kubernetes/blob/v1.25.3/test/images/agnhost/crd-conversion-webhook/converter/example_converter.go#L29
Explanation:
- stable.example.com/v1 defines the hostPort field.
- stable.example.com/v2 splits hostPort into two fields, host and port, and the converter translates between them in both directions.
Core of the v1-to-v2 conversion (key code only):
// get the hostPort field from the CR
hostPort := convertedObject.Object["hostPort"]
// delete the hostPort field
delete(convertedObject.Object, "hostPort")
// split it into host and port
parts := strings.Split(hostPort.(string), ":")
// set host and port on the CR
convertedObject.Object["host"] = parts[0]
convertedObject.Object["port"] = parts[1]
Core of the v2-to-v1 conversion (key code only):
// get the host and port fields from the CR
host, hasHost := convertedObject.Object["host"]
port, hasPort := convertedObject.Object["port"]
// combine them into hostPort and set it on the CR
convertedObject.Object["hostPort"] = fmt.Sprintf("%s:%s", host, port)
// delete the host and port fields from the CR
delete(convertedObject.Object, "host")
delete(convertedObject.Object, "port")
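For illustration, here is a minimal sketch of such a responder in Python (stdlib only). It implements the ConversionReview request/response protocol and mirrors the hostPort and host/port conversion from the Go example above. The port, path, and plain-HTTP setup are illustrative assumptions; a real cluster requires TLS, with the CA bundle placed in spec.conversion.webhook.clientConfig.caBundle.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def convert(obj, desired_api_version):
    # Convert a single custom resource to the desired API version.
    converted = dict(obj)
    if desired_api_version.endswith("/v2") and "hostPort" in converted:
        host, _, port = converted.pop("hostPort").partition(":")
        converted["host"], converted["port"] = host, port
    elif desired_api_version.endswith("/v1") and "host" in converted:
        converted["hostPort"] = f'{converted.pop("host")}:{converted.pop("port")}'
    converted["apiVersion"] = desired_api_version
    return converted


class ConversionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The API server POSTs a ConversionReview containing the objects to convert.
        length = int(self.headers.get("Content-Length", 0))
        review = json.loads(self.rfile.read(length))
        request = review["request"]
        review["response"] = {
            "uid": request["uid"],
            "convertedObjects": [
                convert(obj, request["desiredAPIVersion"]) for obj in request["objects"]
            ],
            "result": {"status": "Success"},
        }
        body = json.dumps(review).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Plain HTTP on port 9443 only to keep the sketch short; real webhooks must use TLS.
    HTTPServer(("", 9443), ConversionHandler).serve_forever()

As discussed above, kopf does not currently provide this, so such a responder would have to run as a separate endpoint until conversion webhooks are supported alongside the existing admission webhooks.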