EventsV1API_patchNamespacedEvent returns error - cannot unmarshal object into Go value of type jsonpatch.Patch
Hello,
Hope you are well.
I am trying to patch an event with the event_series structure. This is the code I have:
// Update event
cJSON *jsonObj = events_v1_event_series_convertToJSON(evt_v1->series);
object_t *patchBody = object_parseFromJSON(jsonObj);
int forceUpdate = 0;
EventsV1API_patchNamespacedEvent(qEntry_proc->apiClient,
                                 qEntry_proc->evt_v1->metadata->name,
                                 qEntry_proc->_namespace,
                                 patchBody,
                                 NULL, NULL, NULL, NULL,
                                 forceUpdate);
But after apiClient_invoke(), I am seeing this error in cJSON_Parse():
'cannot unmarshal object into Go value of type jsonpatch.Patch'
It looks like the patch body I am passing to EventsV1API_patchNamespacedEvent() is not in the format expected for jsonpatch.Patch. Investigating further, I see that a jsonpatch.Patch should look like
[ { "op": "replace", "path": "/baz", "value": "boo" } ]
for an example JSON document { "baz": "qux", "foo": "bar" }.
Any idea how I should transform the event series JSON buffer below into JSON Patch format?
"{\n\t"count":\t1,\n\t"lastObservedTime":\t"2022-10-21T22:51:30.000000Z"\n}"
If I'm approaching this the wrong way, please correct me on the proper usage of EventsV1API_patchNamespacedEvent().
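For what it's worth, here is a minimal sketch of building such a patch body with cJSON, assuming the series should be replaced at a "/series" path in the Event; the path and the helper name are my own, not part of the client library:
// Hypothetical helper: wrap a cJSON value into a one-element JSON Patch array,
// [ { "op": "replace", "path": <path>, "value": <value> } ], which is the shape
// jsonpatch.Patch expects. Takes ownership of `value`.
static cJSON *wrap_as_json_patch_replace(const char *path, cJSON *value)
{
    cJSON *op = cJSON_CreateObject();
    cJSON_AddStringToObject(op, "op", "replace");
    cJSON_AddStringToObject(op, "path", path);
    cJSON_AddItemToObject(op, "value", value);

    cJSON *patch = cJSON_CreateArray();
    cJSON_AddItemToArray(patch, op);
    return patch;   // caller frees with cJSON_Delete()
}

// Possible usage with the code above ("/series" is a guess at the right path):
// cJSON *series = events_v1_event_series_convertToJSON(evt_v1->series);
// cJSON *patch  = wrap_as_json_patch_replace("/series", series);
// object_t *patchBody = object_parseFromJSON(patch);
Note that the API server only parses the body as jsonpatch.Patch when the request is sent with Content-Type: application/json-patch+json; with application/merge-patch+json or application/strategic-merge-patch+json it expects a plain (partial) object instead.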
Thank you very much!
Can you do it manually first with kubectl patch? During this process you will need to create a patch YAML/JSON file. Finally, convert your patch file to a JSON string using the cJSON library.
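For example (the event name, namespace, and patch path here are placeholders), a JSON-type patch from the command line would look something like:
kubectl patch event my-event -n my-namespace --type=json \
  -p '[{"op": "replace", "path": "/series/count", "value": 2}]'
Once the patch works with kubectl, the same JSON string can be built with cJSON and passed as the request body.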
Hi @ityuhui, I also hit the same issue when I used CoreV1API_patchNodeStatus. I think the root cause is the same for both APIs. As the code below shows, more than one "HeaderType" and "ContentType" entry is added to the HTTP headers.
list_addElement(localVarHeaderType,"application/json"); //produces
list_addElement(localVarHeaderType,"application/yaml"); //produces
list_addElement(localVarHeaderType,"application/vnd.kubernetes.protobuf"); //produces
list_addElement(localVarContentType,"application/json-patch+json"); //consumes
list_addElement(localVarContentType,"application/merge-patch+json"); //consumes
list_addElement(localVarContentType,"application/strategic-merge-patch+json"); //consumes
list_addElement(localVarContentType,"application/apply-patch+yaml"); //consumes
But according to RFC 7230, Section 3.2.2 (https://www.rfc-editor.org/rfc/rfc7230#section-3.2.2):
A sender MUST NOT generate multiple header fields with the same field name in a message unless either the entire field value for that header field is defined as a comma-separated list [i.e., #(values)] or the header field is a well-known exception (as noted below).
So I changed the above code a bit, as below:
list_addElement(localVarHeaderType,"application/json"); //produces
// list_addElement(localVarHeaderType,"application/yaml"); //produces
// list_addElement(localVarHeaderType,"application/vnd.kubernetes.protobuf"); //produces
// list_addElement(localVarContentType,"application/json-patch+json"); //consumes
// list_addElement(localVarContentType,"application/merge-patch+json"); //consumes
list_addElement(localVarContentType,"application/strategic-merge-patch+json"); //consumes
// list_addElement(localVarContentType,"application/apply-patch+yaml"); //consumes
It works for me. Would it be possible to add arguments to these patch-related APIs so the caller can configure the "HeaderType" and "ContentType"?
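For illustration, a sketch of what that could look like inside the generated function; the contentType parameter is hypothetical and not something the client currently exposes:
// Hypothetical: register a single Accept and a single Content-Type,
// chosen by the caller instead of hard-coding all four "consumes" values.
list_addElement(localVarHeaderType, "application/json");   // produces
list_addElement(localVarContentType,
                contentType ? contentType
                            : "application/strategic-merge-patch+json");   // consumes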
@smuraligit, you may give the workaround above a try to see whether it resolves your problem.
Thanks!
Thank you. I'll take a look soon.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.