
Handler rules do not register all resources

Open mysiki opened this issue 1 month ago • 7 comments

Long story short

Hello,

I use kopf for mutating/validating webhooks on Crossplane objects. Crossplane releases all objects in both a cluster-wide and a namespaced version.

I let kopf create the k8s webhook.

It looks like the handler misses some CRDs in the webhook rules when the resource name is the same. (I tested it with multiple filters, even with "*" or kopf.EVERYTHING.)

Since I have 101*2 resources, here is just one example:

kubectl api-resources --categories=managed | grep  "vpcs "
NAME                                        SHORTNAMES   APIVERSION                     NAMESPACED   KIND
defaultvpcs                                              ec2.aws.m.upbound.io/v1beta1   true         DefaultVPC
vpcs                                                     ec2.aws.m.upbound.io/v1beta1   true         VPC
defaultvpcs                                              ec2.aws.upbound.io/v1beta1     false        DefaultVPC
vpcs                                                     ec2.aws.upbound.io/v1beta1     false        VPC

And the webhook rules only contain:

....
  - apiGroups:
    - ec2.aws.m.upbound.io
    apiVersions:
    - v1beta1
    operations:
    - UPDATE
    resources:
    - vpcs
  - apiGroups:
    - ec2.aws.m.upbound.io
    apiVersions:
    - v1beta1
    operations:
    - UPDATE
    resources:
    - defaultvpcs
    scope: '*'
...

The only "ec2.aws.upbound.io" resources present are the ones whose apiVersion differs from the "ec2.aws.m.upbound.io" one.

A simple check by counting:

 kubectl api-resources --categories=managed | grep  ec2.aws.upbound.io | wc -l
101

kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io externalname.kopf.dev -o json | jq '[.webhooks[].rules[].apiGroups[] | select(. == "ec2.aws.upbound.io")] | length'
14

And for the ".m" group:

kubectl api-resources --categories=managed | grep  ec2.aws.m.upbound.io | wc -l
101

kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io externalname.kopf.dev -o json | jq '[.webhooks[].rules[].apiGroups[] | select(. == "ec2.aws.m.upbound.io")] | length'
101

Kopf version

No response

Kubernetes version

No response

Python version

No response

Code


Logs


Additional information

No response

mysiki avatar Nov 04 '25 11:11 mysiki

Thanks for reporting. Before I dive in, is there any manual on how to quickly deploy a cluster like this from scratch?

Also, if you make an on-event handler with the same filter, and run in the verbose mode (kopf run -v), do you see those crds also skipped in the cluster discovery process?

My main suspicion now (before I even could touch it) would be that those versions are filtered out as inactive somehow — I will take a closer look once I get to a normal computer.

nolar avatar Nov 04 '25 12:11 nolar

Hello, thanks for your quick reply. I'm also not at my laptop right now, but I tested it on a local cluster with k3s and a built-in crossplane installation.

FYI, I can deploy objects based on these CRDs; I should be able to reproduce the bug by creating custom CRDs, without installing crossplane.

I will try on-event asap.

Also, if I set up the handler to explicitly read this CRD, it works.

mysiki avatar Nov 04 '25 13:11 mysiki

Tested with on_event

@kopf.on.event(category='managed')
def my_handler(event, logger, **_):
    logger.info(f"Event: {event}")

The only vpcs stream that I have is: [DEBUG ] Starting the watch-stream for vpcs.v1beta1.ec2.aws.m.upbound.io cluster-wide. This resource is namespaced (and vpcs.v1beta1.ec2.aws.upbound.io, the one without .m., is cluster-wide); I don't know if the "cluster-wide" in the log is relevant to that.


More tests: I tried creating 2 CRDs (one namespaced, one cluster-scoped) and it works.

So finally I scoped my CRD directly, and now I get this message: kopf._core.reactor.o [WARNING ] Unresolved resources cannot be served (try creating their CRDs): Selector(group='ec2.aws.upbound.io', any_name='vpcs')

but the CRD does exist:

kubectl get crds vpcs.ec2.aws.upbound.io -o wide
NAME                      CREATED AT
vpcs.ec2.aws.upbound.io   2025-11-02T09:24:29Z

FYI, my pod runs with a service account bound to cluster-admin.

I don't really know where to look next.

mysiki avatar Nov 04 '25 15:11 mysiki

Note: if I focus the event on only the vpcs.m resource, it works:

Starting the watch-stream for vpcs.v1beta1.ec2.aws.m.upbound.io cluster-wide.

I compared the CRDs of objects which work and objects which don't, and didn't see any reason for the difference.

Works:

(image attachment)

Does NOT work:

(image attachment)

mysiki avatar Nov 04 '25 16:11 mysiki

A little progress: I discovered that in "ec2.aws.upbound.io", only v1beta2 resources are watched. If I create an explicit event handler on 'vpcs.v1beta1.ec2.aws.upbound.io', it works, but if I just provide 'vpcs.ec2.aws.upbound.io', nothing is watched.

mysiki avatar Nov 04 '25 16:11 mysiki

Sorry for all this noise :D

Last update: in fact, some CRDs have only v1beta1 and some have both (v1beta1 and v1beta2).

When I use : @kopf.on.event('ec2.aws.upbound.io/v1beta1', kopf.EVERYTHING)

I get all the v1beta1 resources.

But when I use @kopf.on.event('ec2.aws.upbound.io', kopf.EVERYTHING)

I only get the v1beta2 resources.

I don't know how to solve or work around this; my goal is to watch all of 'ec2.aws.upbound.io', both v1beta1 and v1beta2. Do you have any idea / solution?

(On top of that, crossplane has around 30 potential API groups (ec2.aws, s3.aws, rds.aws, ...), so I would like to just use the category, which is present in all CRDs, but I'm facing this v1beta1/v1beta2 problem.)

mysiki avatar Nov 04 '25 21:11 mysiki

Hello @nolar, I now have a simple test setup and I can confirm that: if, within the same API group, one CRD exists with a higher version, the other CRDs (with only lower versions) are not picked up by the kopf selector.

Reproduce (on any vanilla cluster):

Deploy 2 test CRDs with:

  • API group: test.com
  • category: test
  • one (rssimples) with only the v1beta1 version
  • a second (rsmultis) with both v1beta1 and v1beta2 versions

Create a kopf event handler with a selector on category='test'.

Log result: only rsmultis/v1beta2 is watched.

I would expect to see all of them (rssimple/v1beta1, rsmultis/v1beta1, and rsmultis/v1beta2).
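
The behaviour above is consistent with a per-group "highest version wins" selection. The following plain-Python sketch is my reconstruction of that suspected behaviour, not kopf's actual code, using the test CRDs below:

```python
# Discovered (group, version, plural) triples for the two test CRDs.
discovered = [
    ("test.com", "v1beta1", "rssimples"),
    ("test.com", "v1beta1", "rsmultis"),
    ("test.com", "v1beta2", "rsmultis"),
]

def served(resources):
    # Suspected behaviour: one "preferred" version is chosen per API
    # group, and only resources served under that version survive.
    # (Naive string comparison stands in for real version ordering.)
    latest = {}
    for group, version, _plural in resources:
        if version > latest.get(group, ""):
            latest[group] = version
    return [(g, v, p) for g, v, p in resources if v == latest[g]]

print(served(discovered))
# → [('test.com', 'v1beta2', 'rsmultis')], i.e. only rsmultis/v1beta2
```

If something like this happens internally, rssimples/v1beta1 is dropped merely because another CRD in the same group serves v1beta2.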

To reproduce:

CRDs:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: rssimples.test.com
spec:
  group: test.com
  names:
    kind: Rssimple
    plural: rssimples
    singular: rssimple
    categories:
    - test
  scope: Namespaced
  versions:
    - name: v1beta1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            apiVersion:
              type: string
            kind:
              type: string
            metadata:
              type: object
            spec:
              type: object
              properties:
                my-own-property:
                  type: string
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: rsmultis.test.com
spec:
  group: test.com
  names:
    kind: Rsmulti
    plural: rsmultis
    singular: rsmulti
    categories:
    - test
  scope: Namespaced
  versions:
    - name: v1beta1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            apiVersion:
              type: string
            kind:
              type: string
            metadata:
              type: object
            spec:
              type: object
              properties:
                my-own-property:
                  type: string
    - name: v1beta2
      served: true
      storage: false
      schema:
        openAPIV3Schema:
          type: object
          properties:
            apiVersion:
              type: string
            kind:
              type: string
            metadata:
              type: object
            spec:
              type: object
              properties:
                my-own-property:
                  type: string

Kopf event handler:

@kopf.on.event(category='test')
def my_handler(event, logger, **_):
    logger.info(f"Event: {event}")

Log result:

[2025-11-11 22:07:43,816] asyncio              [DEBUG   ] Using selector: EpollSelector
[2025-11-11 22:07:43,818] kopf._core.reactor.r [DEBUG   ] Starting Kopf 1.38.0.
[2025-11-11 22:07:43,818] kopf.activities.star [DEBUG   ] Activity 'configure' is invoked.
[2025-11-11 22:07:43,818] kopf.activities.star [INFO    ] Activity 'configure' succeeded.
[2025-11-11 22:07:43,819] kopf._core.engines.a [INFO    ] Initial authentication has been initiated.
[2025-11-11 22:07:43,819] kopf.activities.auth [DEBUG   ] Activity 'login_with_service_account' is invoked.
[2025-11-11 22:07:43,819] kopf._core.engines.p [DEBUG   ] Serving health status at http://0.0.0.0:8080/healthz
[2025-11-11 22:07:43,820] kopf.activities.auth [INFO    ] Activity 'login_with_service_account' succeeded.
[2025-11-11 22:07:43,820] kopf._core.engines.a [INFO    ] Initial authentication has finished.
[2025-11-11 22:07:43,945] kopf._cogs.clients.w [DEBUG   ] Starting the watch-stream for customresourcedefinitions.v1.apiextensions.k8s.io cluster-wide.
[2025-11-11 22:07:43,946] kopf._kits.webhooks  [DEBUG   ] Using a provided certificate for HTTPS.
[2025-11-11 22:07:43,947] kopf._cogs.clients.w [DEBUG   ] Starting the watch-stream for rsmultis.v1beta2.test.com cluster-wide.
[2025-11-11 22:07:43,948] kopf._kits.webhooks  [DEBUG   ] Listening for webhooks at https://*
[2025-11-11 22:07:43,948] kopf._kits.webhooks  [DEBUG   ] Accessing the webhooks at https://webhook.localhost
[2025-11-11 22:07:43,948] kopf._core.engines.a [INFO    ] Reconfiguring the validating webhook externalname.kopf.dev.
[2025-11-11 22:07:43,951] kopf._core.engines.a [INFO    ] Reconfiguring the mutating webhook externalname.kopf.dev.

mysiki avatar Nov 11 '25 22:11 mysiki