
kopf.event not appearing in 'kubectl describe' for cluster scoped objects

Open · ErikEngerd opened this issue 2 years ago · 11 comments

Keywords

kopf.event

Problem

I have created an operator with a kopf.on.create hook. At the end of creating the resource in this hook, I create an event. No matter what I do, kopf.event, kopf.info, and kopf.warn give no output. I have also used a kopf.on.startup hook to set the logging level to DEBUG, but no matter what I do, there is no output in kubectl describe for my operator's objects.

Example

@kopf.on.create('wamblee.org', 'v1', 'simplevolumes')
def create_fn(body, spec, **kwargs):
    res = simplevolumes_on_create(body, spec, **kwargs)
    kopf.warn(body, reason="Created", message="hello")
    return res

The simplevolumes_on_create call is in a package that I am importing.

ErikEngerd · Aug 23 '22 19:08

The problem occurs even when I remove all other code. So I now have:

@kopf.on.create('wamblee.org', 'v1', 'simplevolumes')
def create_fn(body, spec, **kwargs):
    kopf.event(body, type="Normal", reason="Created", message="hello")

Above, I am using kopf.event to eliminate the additional filtering on log level done by kopf.warn. Still no output. Could there be some cluster setting that would prevent events from getting added? I checked other objects, and standard Kubernetes objects such as Pod and Ingress do show their events.

Environment:

  • kubernetes 1.23.7 with containerd
  • kopf version 1.35.6

Any help would be appreciated.

ErikEngerd · Aug 23 '22 19:08

This is the CRD. It is cluster-scoped.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: simplevolumes.wamblee.org
spec:
  scope: Cluster
  group: wamblee.org
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              required:
                - host
                - path
                - type
              properties:
                type:
                  type: string
                storage:
                  type: string
                accessModes:
                  type: array
                  items:
                    type: string
                host:
                  type: string
                path:
                  type: string
                persistentVolumeClaims:
                  type: array
                  items: 
                    type: object
                    required:
                      - name
                      - namespace
                    properties: 
                      name:
                        type: string
                      namespace:
                        type: string 
      additionalPrinterColumns:
        - name: Type
          type: string
          priority: 0
          jsonPath: .spec.type
          description: The type of storage used
        - name: Host
          type: string
          priority: 0
          jsonPath: .spec.host
          description: The node(s) where storage resides
        - name: Path
          type: string
          priority: 0
          jsonPath: .spec.path
          description: The storage path. 
      
  names:
    kind: SimpleVolume
    plural: simplevolumes
    singular: 
    shortNames:
      - vol
      - vols

ErikEngerd · Aug 23 '22 20:08

When I modify the CRD to scope 'Namespaced', kopf.event() works. However, I still cannot get any output using the logging framework.

Is this a known limitation, that generating events for cluster-scoped resources does not work? Also, is there something special to configure to get 'logging' to work? I created a startup handler where I set the logging level, but that does not help:

@kopf.on.startup()
def configure(settings: kopf.OperatorSettings, **_):
    print("STARTUP " + str(settings.posting.enabled))
    logging.getLogger("kopf.objects").setLevel(logging.INFO)
    settings.posting.level = logging.INFO
    settings.watching.connect_timeout = 1 * 60
    settings.watching.server_timeout = 10 * 60

ErikEngerd · Aug 29 '22 19:08

I found out what the second problem is. Apparently, one has to use the logger argument of the event handler to log events. This is a bit unfortunate, since I am delegating to other code in a package and now have to pass the logger around all over the place instead of using 'logging.info(...)'. Right now I am working around this using a thread-local variable.
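The thread-local workaround can be sketched with contextvars (which also covers kopf's async handlers); the names current_logger and do_work are illustrative, not part of kopf:

```python
import contextvars
import logging

# The handler stashes kopf's per-object logger once; deeply nested
# library code retrieves it without the logger being threaded through
# every call signature.
current_logger: contextvars.ContextVar = contextvars.ContextVar(
    "current_logger", default=logging.getLogger("kopf.objects"))

def do_work() -> None:
    # Somewhere deep inside an imported package: no logger parameter needed.
    current_logger.get().info("doing the actual work")

# Inside the kopf handler one would then do (sketch):
# @kopf.on.create('wamblee.org', 'v1', 'simplevolumes')
# def create_fn(body, spec, logger, **kwargs):
#     current_logger.set(logger)
#     do_work()
```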

Still no luck in getting events for cluster-scoped objects to work.

ErikEngerd · Aug 29 '22 19:08

Hi. Sorry for the delay. It is great that you have found a solution!

Is this a known limitation that generating events for Cluster scoped resources does not work?

I am not sure whether it is a limitation of Kopf or of Kubernetes itself. Kubernetes Events are namespaced, as far as I remember, so you cannot create an event for a cluster-scoped resource. Unless there is a way to do this (e.g. without Kopf, purely with the API or a client library). If so, that way can be added to Kopf.

The conversion of a resource body to an Event's reference happens here: https://github.com/nolar/kopf/blob/7b4569024b5a9382195bd0ba76ffb164c2c41bbd/kopf/_cogs/structs/bodies.py#L228-L244
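Paraphrased for discussion (this is a stand-in sketch, not kopf's exact code), that conversion does roughly the following:

```python
def build_object_reference(body: dict) -> dict:
    # Roughly what the linked bodies.py code does: turn a resource body
    # into the Event's involvedObject reference. For a cluster-scoped
    # object, metadata has no namespace, so the reference gets None.
    meta = body.get('metadata', {})
    return {
        'apiVersion': body.get('apiVersion'),
        'kind': body.get('kind'),
        'name': meta.get('name'),
        'uid': meta.get('uid'),
        'namespace': meta.get('namespace'),  # None for cluster-scoped objects
    }
```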

nolar · Aug 29 '22 20:08

It seems that for cluster-scoped objects, the default namespace should be used for the event. See here.

I can try tomorrow to see if I can create an event for a cluster scoped object in this way using the kubernetes API.

ErikEngerd · Aug 29 '22 22:08

I managed to generate an event for a cluster-scoped object (a Node) using the Kubernetes Python client as follows:

import datetime

import kubernetes.client
import pytz
from kubernetes import client, config

config.load_kube_config()

api = client.CoreV1Api()

nodes = api.list_node()

node = nodes.items[0]
print("Node: " + node.metadata.name)
print("Node namespace: " + str(node.metadata.namespace))

body = kubernetes.client.CoreV1Event(
    count=1,
    first_timestamp=datetime.datetime.now(pytz.utc),
    involved_object=kubernetes.client.V1ObjectReference(
        kind="Node",
        name=node.metadata.name,
        namespace=node.metadata.namespace,
        uid=node.metadata.uid,
    ),
    last_timestamp=datetime.datetime.now(pytz.utc),
    message="Something terrible happened",
    metadata=kubernetes.client.V1ObjectMeta(
        name="The end of the cluster is near: " + str(datetime.datetime.now())
    ),
    reason="SometingHappened",
    source=kubernetes.client.V1EventSource(component="testing"),
    type="Warning",
)
res = api.create_namespaced_event("default", body)

When using the default namespace, the event was successfully generated:

> python genevent.py 
Node: cobra
Node namespace: None
> kubectl describe node cobra
....[SNIP]...
Events:
  Type     Reason            Age   From     Message
  ----     ------            ----  ----     -------
  Warning  SometingHappened  16s   testing  Something terrible happened

This works when using the "default" namespace, but it does not work when using "". So apparently, one has to specify "default" for cluster-scoped objects.
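Summarized as a tiny helper (the name posting_namespace is illustrative, not from any library): an Event must always be posted into a concrete namespace, so cluster-scoped objects fall back to "default".

```python
from typing import Optional

def posting_namespace(obj_namespace: Optional[str]) -> str:
    # An Event must be created in a real namespace; cluster-scoped
    # objects have none, so fall back to "default".
    return obj_namespace or "default"

assert posting_namespace(None) == "default"          # cluster-scoped object
assert posting_namespace("") == "default"            # empty string also falls back
assert posting_namespace("kube-system") == "kube-system"
```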

This means that, most likely, in the kopf code linked above:

namespace=body.get('metadata', {}).get('namespace')

could be replaced by

namespace=body.get('metadata', {}).get('namespace', 'default')

That is, assuming of course that in the kopf framework, get('namespace') would return None.
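That assumption matters, because Python's dict.get default only applies when the key is missing entirely, not when it is present with the value None:

```python
meta_missing = {}                    # "namespace" key absent
meta_none = {"namespace": None}      # key present, value None

# The default kicks in only when the key is absent...
assert meta_missing.get("namespace", "default") == "default"
# ...but NOT when the key exists with value None:
assert meta_none.get("namespace", "default") is None

# A fallback that covers both cases:
assert (meta_none.get("namespace") or "default") == "default"
```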

ErikEngerd · Aug 30 '22 18:08

Some weird things happened. After testing my changes, I still did not see events. Then I saw that there were events from before I made my changes: 'kubectl get events -A' showed events that were generated earlier. However, those events were not shown in the 'kubectl describe' output.

Now, taking one of these events and editing it to remove the namespace from the involvedObject, changing

involvedObject:
  apiVersion: wamblee.org/v1
  kind: DatabaseServer
  name: mysql-local
  namespace: default
  uid: 13f4ba1b-8671-4574-8df1-2dc5c3a23542

into

involvedObject:
  apiVersion: wamblee.org/v1
  kind: DatabaseServer
  name: mysql-local
  uid: 13f4ba1b-8671-4574-8df1-2dc5c3a23542

I started seeing the events in the output of 'kubectl describe'.

So the namespace of the involvedObject must be absent (None in Python), while the namespace of the Event itself is 'default'.
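Schematically (same example object as above; only the involvedObject differs between the two cases):

```python
# The Event object itself lives in "default" either way, but for a
# cluster-scoped resource the involvedObject must carry no namespace
# at all, or `kubectl describe` will not match it to the resource.
visible_event = {
    "metadata": {"namespace": "default"},   # where the Event is stored
    "involvedObject": {
        "apiVersion": "wamblee.org/v1",
        "kind": "DatabaseServer",
        "name": "mysql-local",
        "uid": "13f4ba1b-8671-4574-8df1-2dc5c3a23542",
        # no "namespace" key: this is what makes describe show it
    },
}
invisible_event = {
    "metadata": {"namespace": "default"},
    "involvedObject": {**visible_event["involvedObject"], "namespace": "default"},
}
assert "namespace" not in visible_event["involvedObject"]
assert invisible_event["involvedObject"]["namespace"] == "default"
```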

ErikEngerd · Aug 30 '22 19:08

I think I have fixed the issue. I have made the following fix based on the HEAD of main:

> git diff
diff --git a/kopf/_cogs/clients/events.py b/kopf/_cogs/clients/events.py
index 25c54df..c502a58 100644
--- a/kopf/_cogs/clients/events.py
+++ b/kopf/_cogs/clients/events.py
@@ -40,7 +40,7 @@ async def post_event(
     namespace_name: str = ref.get('namespace') or (await api.get_default_namespace()) or 'default'
     namespace = references.NamespaceName(namespace_name)
     full_ref: bodies.ObjectReference = copy.copy(ref)
-    full_ref['namespace'] = namespace
+    full_ref['namespace'] = ref.get('namespace')
 
     # Prevent a common case of event posting errors but shortening the message.
     if len(message) > MAX_MESSAGE_LENGTH:

This change entails using namespace None for the involved object, while still using 'default' for posting the Event itself. This works because events.py already contains logic for determining the Event's own namespace for cluster-scoped objects:

namespace_name: str = ref.get('namespace') or (await api.get_default_namespace()) or 'default'

What do you think of this change? And how to proceed now?

ErikEngerd · Aug 30 '22 19:08

Any news on this issue? I could write a pull request, but I think test cases should also be added, and I don't know how to write those.
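A test could exercise the namespace split in isolation. The sketch below reimplements the fixed logic as a standalone function (resolve_event_namespaces is a stand-in name, not kopf's actual API or test harness), so the behavior can be asserted without a cluster:

```python
from typing import Optional, Tuple

def resolve_event_namespaces(
        ref: dict,
        default_namespace: Optional[str] = None) -> Tuple[str, Optional[str]]:
    # The Event itself is posted into a concrete namespace...
    posting_ns = ref.get("namespace") or default_namespace or "default"
    # ...while the involvedObject keeps the resource's own namespace,
    # which stays absent (None) for cluster-scoped objects.
    involved_ns = ref.get("namespace")
    return posting_ns, involved_ns

def test_cluster_scoped_event_namespaces():
    # Cluster-scoped: post into "default", involvedObject has no namespace.
    assert resolve_event_namespaces({"namespace": None}) == ("default", None)

def test_namespaced_event_namespaces():
    # Namespaced: both the Event and the involvedObject use the
    # resource's namespace.
    assert resolve_event_namespaces({"namespace": "prod"}) == ("prod", "prod")
```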

ErikEngerd · Sep 03 '22 19:09

Hi, I'm having the same issue, and the solution proposed by @ErikEngerd seems fine: the namespace in the involvedObject field must not be specified for cluster-scoped objects for Kubernetes to associate the event with the resource.

ezeriver94 · Oct 13 '23 12:10