CoreV1Event fields not correctly serialized, not possible to create node events
What happened (please include outputs or screenshots):
When trying to submit an event with some optional fields not set (e.g. reportingComponent, reportingInstance, action, involvedObject.{namespace,uid,resourceVersion}), the API server returns errors like this:
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Event \"ip-10-23-48-87.ec2.internal.wq2bn\" is invalid: [involvedObject.namespace: Invalid value: \"\": does not match event.namespace, reportingController: Required value, reportingController: Invalid value: \"\": name part must be non-empty, reportingController: Invalid value: \"\": name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]'), reportingInstance: Required value, action: Required value]","reason":"Invalid","details":{"name":"ip-10-23-48-87.ec2.internal.wq2bn","kind":"Event","causes":[{"reason":"FieldValueInvalid","message":"Invalid value: \"\": does not match event.namespace","field":"involvedObject.namespace"},{"reason":"FieldValueRequired","message":"Required value","field":"reportingController"},{"reason":"FieldValueInvalid","message":"Invalid value: \"\": name part must be non-empty","field":"reportingController"},{"reason":"FieldValueInvalid","message":"Invalid value: \"\": name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')","field":"reportingController"},{"reason":"FieldValueRequired","message":"Required value","field":"reportingInstance"},{"reason":"FieldValueRequired","message":"Required value","field":"action"}]},"code":422}
Since involvedObject.namespace cannot be left unset or set to None, it is not possible to create events for cluster-scoped involvedObjects (e.g. nodes).
What you expected to happen:
It should be possible to omit these optional fields. It should be possible to create events with cluster-scoped involvedObjects.
How to reproduce it (as minimally and precisely as possible):
import platform
import datetime
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

all_nodes = v1.list_node().items
for node in all_nodes:
    # list_node() leaves api_version/kind unset on the returned items
    node.api_version = 'v1'
    node.kind = 'Node'

obj = all_nodes[0]
v1.create_namespaced_event('default', client.CoreV1Event(
    api_version='v1',
    kind='Event',
    metadata=client.V1ObjectMeta(
        namespace='default',
        generate_name=f"{obj.metadata.name}.",
    ),
    source=client.V1EventSource(
        component='my-component',
        host=platform.node(),
    ),
    reporting_component="TestComponent",
    reporting_instance="TestInstance",
    action="Test action",
    type="Normal",
    reason="Test",
    message="Test message",
    # preformatted string instead of a datetime object; see #730
    event_time=datetime.datetime.utcnow().isoformat(timespec='microseconds') + 'Z',
    involved_object=client.V1ObjectReference(
        api_version=obj.api_version,
        kind=obj.kind,
        namespace=obj.metadata.namespace,  # None for cluster-scoped resources such as nodes
        name=obj.metadata.name,
        resource_version=obj.metadata.resource_version,
        uid=obj.metadata.uid,
    ),
))
Anything else we need to know?:
It looks like this might be due to something in the library serializing the None value for the namespace field as an empty string instead of just omitting the field.
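One way to check this hypothesis (a diagnostic sketch, not from the original report; it uses ApiClient.sanitize_for_serialization(), the method the generated client uses to build the JSON request body):

import json
from kubernetes import client

# An involvedObject with namespace left unset (None), as it would be for a node.
ref = client.V1ObjectReference(api_version='v1', kind='Node', name='my-node')

# sanitize_for_serialization() converts the model into the dict that becomes
# the request body; None-valued attributes should be omitted entirely.
body = client.ApiClient().sanitize_for_serialization(ref)
print(json.dumps(body))

If the output contains "namespace": "" instead of omitting the key, the client's serializer is at fault; if the key is absent, the empty string is being introduced server-side.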
Environment: AWS EKS
- Kubernetes version (kubectl version): 1.19
- OS (e.g., MacOS 10.13.6): Linux
- Python version (python --version): 3.9.7
- Python client version (pip list | grep kubernetes): 21.7.0
Hi @alfredkrohmer, in Kubernetes, events are a namespaced API resource, so it is impossible to create a cluster-scoped event. You can check this by executing "kubectl api-resources".
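For reference, the relevant rows look roughly like this (the exact column layout varies by kubectl version; the NAMESPACED column is true for both event APIs):

$ kubectl api-resources | grep -w events
events   ev   v1                 true   Event
events   ev   events.k8s.io/v1   true   Event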
/assign @showjason Thanks!
@showjason the problem is not creating a cluster-scoped event (which I'm not trying to do) but referring to cluster-scoped resources in the involvedObject section of the event.
@alfredkrohmer sorry, I misunderstood your point. I tried to reproduce this case, and I think the line event_time=datetime.datetime.utcnow().isoformat(timespec='microseconds') + 'Z', should be removed from your code; without it, the event is created successfully!
BTW, if you are still confused about why this happens, please refer to this discussion.
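A minimal sketch of that suggestion (an illustration, not from the original thread; it assumes a reachable cluster and reuses the placeholder values from above). Judging by the error message quoted at the top, omitting event_time also means the server no longer demands reporting_component, reporting_instance, or action:

import platform
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
node = v1.list_node().items[0]

v1.create_namespaced_event('default', client.CoreV1Event(
    metadata=client.V1ObjectMeta(generate_name=f"{node.metadata.name}."),
    source=client.V1EventSource(component='my-component', host=platform.node()),
    type='Normal',
    reason='Test',
    message='Test message',
    # event_time deliberately omitted; per the discussion above, the server
    # then accepts the event without reportingController/Instance/action
    involved_object=client.V1ObjectReference(
        api_version='v1',
        kind='Node',
        name=node.metadata.name,
        uid=node.metadata.uid,
    ),
))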
@showjason No, this is not the issue; the event_time hack is due to #730.
Please try to understand my actual issue description.
Could you enable debug logging and share the log (the request body)?
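For reference, one way to enable the client's debug logging (a sketch; the debug flag is part of the generated Configuration class):

from kubernetes import client, config

config.load_kube_config()
configuration = client.Configuration.get_default_copy()
configuration.debug = True  # turns on http.client tracing and DEBUG-level client loggers
client.Configuration.set_default(configuration)
v1 = client.CoreV1Api()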
send: b'GET /api/v1/nodes HTTP/1.1\r\nHost: 127.0.0.1:43987\r\nAccept-Encoding: identity\r\nAccept: application/json\r\nUser-Agent: OpenAPI-Generator/21.7.0/python\r\nContent-Type: application/json\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Cache-Control: no-cache, private
header: Content-Type: application/json
header: X-Kubernetes-Pf-Flowschema-Uid: 10fde20a-8235-418b-a172-33bab220f494
header: X-Kubernetes-Pf-Prioritylevel-Uid: 9504a57a-6e78-4981-a60d-8e736eab57e5
header: Date: Wed, 06 Apr 2022 10:11:38 GMT
header: Transfer-Encoding: chunked
Traceback (most recent call last):
File "/home/alfredkr/test.py", line 19, in <module>
v1.create_namespaced_event('default', client.CoreV1Event(
File "/home/alfredkr/.local/lib/python3.9/site-packages/kubernetes/client/api/core_v1_api.py", line 6906, in create_namespaced_event
return self.create_namespaced_event_with_http_info(namespace, body, **kwargs) # noqa: E501
File "/home/alfredkr/.local/lib/python3.9/site-packages/kubernetes/client/api/core_v1_api.py", line 7001, in create_namespaced_event_with_http_info
return self.api_client.call_api(
File "/home/alfredkr/.local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "/home/alfredkr/.local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
File "/home/alfredkr/.local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 391, in request
return self.rest_client.POST(url,
File "/home/alfredkr/.local/lib/python3.9/site-packages/kubernetes/client/rest.py", line 275, in POST
return self.request("POST", url,
File "/home/alfredkr/.local/lib/python3.9/site-packages/kubernetes/client/rest.py", line 234, in request
raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (422)
Reason: Unprocessable Entity
HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '10fde20a-8235-418b-a172-33bab220f494', 'X-Kubernetes-Pf-Prioritylevel-Uid': '9504a57a-6e78-4981-a60d-8e736eab57e5', 'Date': 'Wed, 06 Apr 2022 10:11:38 GMT', 'Content-Length': '438'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Event \"kind-control-plane.nb8dr\" is invalid: involvedObject.namespace: Invalid value: \"\": does not match event.namespace","reason":"Invalid","details":{"name":"kind-control-plane.nb8dr","kind":"Event","causes":[{"reason":"FieldValueInvalid","message":"Invalid value: \"\": does not match event.namespace","field":"involvedObject.namespace"}]},"code":422}
https://github.com/kubernetes-client/python/issues/1682#issuecomment-1090095578
I thought it would print the request body. Was it printed before the first line?
No, that's the whole output
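One possible explanation (an assumption, not confirmed in the thread): the client emits its debug records, such as the response body, through the standard logging module, so they are discarded unless the calling script configures a handler:

import logging

# Without a configured handler, the kubernetes.client loggers' DEBUG records
# (e.g. "response body: ...") never reach stdout/stderr.
logging.basicConfig(level=logging.DEBUG)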
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Closing due to inactivity. Please feel free to reopen if you have any questions.
/close
@roycaihw: Closing this issue.
In response to this:
Closing due to inactivity. Please feel free to reopen if you have any questions.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What do you mean by inactivity? I described the problem precisely and provided as much input as possible and as requested. Why was this closed?
/reopen
@alfredkrohmer: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.